The Changing Shape of the Media in 2017

The changing ways in which we all consume media, and how the media itself needs to evolve in response to such changes, were major topics at the recent Code Conference. Executives such as Dean Baquet of The New York Times, Jeff Bewkes of Time Warner, Reed Hastings of Netflix, and Shari Redstone of Viacom and CBS each talked about how their businesses were changing.

As usual, Mary Meeker of Kleiner Perkins set the stage for many of the media discussions with her annual report on Internet Trends.

What I found most interesting was the amazing growth in the amount of time people are spending on the Internet, particularly on mobile devices.

Five years ago, the average person spent less than an hour a day consuming digital media on a mobile device; today that number is more than 3 hours a day.

Mobile advertising has grown as well, and the combination of mobile and desktop Internet media consumption now accounts for more time—and more ad spend—than TV in the U.S. And, Meeker said, the same will soon be true globally. She addressed the growth of ad blocking and how the leading ad platforms are providing more ways of tracking and measuring ads, with the goal of helping advertisers to reach people with the right ad at the right time and place.

On other topics, I was most surprised by her focus on gaming, which she credited to Bing Gordon, now of Kleiner Perkins but formerly with Electronic Arts. She highlighted the growth of interactive games and how they are now impacting learning and engagement, as well as how gaming concepts, techniques, and tools are becoming the foundation for all sorts of Internet services well beyond standard gaming.

Meeker also touched on the growth of cloud computing and its impact on enterprise computing in general; the Chinese and Indian internet markets; and health care. It’s worth a look at the whole 355-slide presentation to capture the full scope of the changes she sees.

Dean Baquet, executive editor of The New York Times, said that he thinks “news is not relevant if it’s not widely read,” and that while it is important that the Times gets subscribers, it’s also important to have a “permeable” paywall, which is how the site gets 130 million monthly readers.

Baquet talked about how the Times needs to evolve, with writers having different voices, and perhaps also adding their pictures and biographies to the stories so readers understand their backgrounds. He said everyone involved with the Times understands that the paper has changed, has to change more, and can change while maintaining its essential role. He talked about the Washington Post as a current rival, and said he wants to beat the Post on most stories, but also that he “want[s] them to succeed.” Baquet defended the paper’s recent hiring of a conservative columnist, and said he believes all newsrooms should be more diverse politically.

“I think the biggest crisis in journalism in America is the crisis in local news,” Baquet said, noting that the Times, the Washington Post, and the Wall Street Journal are doing okay, but that smaller, local papers are not. He said that we need to figure out a way to ensure that local issues, such as school boards, are covered, but he isn’t sure what the right model may be. Asked about philanthropy funding journalism, he noted that the Times is doing a project with the New Orleans Times-Picayune about the environment. In this effort, the Times-Picayune’s reporting was subsidized by a philanthropist.

Netflix CEO Reed Hastings talked about the success the company has had with TV shows such as House of Cards and Orange is the New Black, but said it’s “just getting started” in creating its own movies. Hastings said Netflix chose to create serialized shows first because of binge watching, a pattern the company could see in how subscribers watched reruns of other shows, but it now wants to create a wide variety of movies, from high-end to low-cost films.

Currently, movies are typically available to content distributors through windowing systems: first they appear in theaters, then on pay-per-view, and only later on platforms such as Netflix. Hastings said it is “inevitable” that this system will break down.

Hastings noted that most of Netflix’s growth is now international, and said that the firm is currently commissioning original content in more languages and from more countries, in particular citing France, Germany, Turkey, Japan, and India.

Asked about the other technology companies—Google, Facebook, and Apple—getting into the business of commissioning unique content, he said the market is “nowhere near saturation” and said the more entrants, the more work for the talent. Hastings did say that the concept of linear TV has lasted almost 100 years, but predicted that in the next 20 years “it’s all going to be on-demand.”

I was particularly interested in how he differentiated Netflix from Amazon Prime Video, saying he would rather be compared with a premium service, such as HBO, as opposed to Amazon, which wants to be very broad. Hastings doesn’t think Netflix would consider ad-supported content as opposed to subscription-supported offerings, saying companies like Facebook and Google are better suited for ad-supported models.

Hastings acknowledged that “[Netflix is] not trying to meet all the needs” of its customers, and noted that customers watch other video as well; he mentioned sports as a category that’s “hard to transform” because it typically doesn’t have much afterlife or binge viewing.

A lot of the discussion focused on “net neutrality,” and Hastings acknowledged that Netflix isn’t playing the lead role in the conversation that it did a few years ago. “We think net neutrality is incredibly important,” for society, innovation, and entrepreneurs, he said, but added that Netflix is now so big that net neutrality is no longer its “primary battle.”

Jill Soloway, creator of Amazon Prime TV shows Transparent and I Love Dick, said she first pitched Transparent to many other channels and wasn’t sure what to expect. Amazon offered to give her back the pilot episode if it didn’t pick the show up as a series, and she noted that Amazon gave her fewer notes on shows than other TV networks did. Amazon and others are now more open to the idea of a flawed female lead, she said.

Soloway talked about how she discussed with Amazon founder Jeff Bezos how stories such as Transparent can have more impact on the culture than politics. Soloway’s new company, called Topple, aims to “topple the patriarchy,” as well as to produce more content by women, people of color, and those with different voices.

Defending the company’s plan to be acquired by AT&T, Time Warner CEO Jeff Bewkes said that combining Time Warner’s content with AT&T’s abilities and retail distribution will enable the integrated organization to move faster as we move into a new era for video. He said the acquisition is undergoing the normal Department of Justice review, and he doesn’t think the change in administrations will impact it going forward.

Bewkes talked about the big changes in media over the past five years, mentioning mobile video, broadband video, improved at-home video choices, and better navigation. He said, “we tried to do ‘TV everywhere’ 7 years ago,” but it didn’t work because the technology wasn’t there.

Bewkes said Time Warner had content and distribution together for 20 years (because of its former ownership of Time Warner Cable), but noted the company spun it off “because it wasn’t national.” He said it only covered 12 percent of the country, while AT&T, through its DirecTV and mobile businesses, has a national footprint and national competitors. “It has to be everywhere,” Bewkes said, emphasizing that with national mobile, broadcast, and video distribution, the company will have direct information about consumers across the country.

While Bewkes said the combined company could offer new products faster, he said it wouldn’t offer content that isn’t also available on other platforms. One advantage is that if you have distribution with direct retail data, you know who is watching when you launch new products.

“We think there ought to be more innovation and more competition,” Bewkes said, noting that Time Warner invented subscriber-supported TV with HBO, and also has networks supported by a combination of subscriptions and advertising, such as TNT, FX, and CNN.

On the subject of net neutrality, he said, “nobody seems to agree on what you are talking about.” Bewkes said it doesn’t make sense for the FCC to place data restrictions on telecommunications firms while the FTC makes less of an issue of privacy when it comes to digital companies; there’s “no reason that Google and Facebook should have more lax restrictions on data use.”

Shari Redstone, vice chair of CBS and Viacom, acknowledged that last fall she had backed a merger of CBS and Viacom, but has since changed her mind. “They’re both stars,” she said, and since the change in management at Viacom—now under CEO Bob Bakish—she sees much more “energy” at that company. It became apparent to her that assets were undervalued at Viacom, and that a merger would have hurt the momentum of Viacom.

Redstone said it’s now a great time to do deals because content is more valuable than ever before. Although she acknowledged that the pay-TV model is challenged, it is still a big business. “We have to create great content on multiple platforms,” she said, designed both for linear TV and as short-form digital content.

Redstone also discussed CBS, where CEO Les Moonves just signed another contract. Asked about the value of sports given declining NFL ratings, she said she sees sports as very important and as part of the network’s success going forward. “People want exclusive content,” she said. She believes NFL ratings were down because things “got very confusing” for consumers last year, with too many networks and social issues, and stated she has “full confidence in the NFL.” In the future, it will not just be about content, but about the experience around the content, she said.

Redstone, who is also a co-founder and managing partner of Advancit Capital, explained that the venture capital firm invests in early-stage ventures like digital measurement company Moat (recently acquired by Oracle), video suppliers like Maker Studios (recently acquired by Disney), and a number of augmented reality firms, while Viacom and CBS look at later-stage investments. Redstone said that the most value she can add to the larger companies is an understanding of what is going on in that world. She said her biggest mistake as a VC was passing on Twitch.

http://www.pcmag.com/article/354296/the-changing-shape-of-the-media-in-2017

Ballmer, Case, and Rubin Talk Technology at Code

At the Code Conference, media and politics took center stage, as opposed to technology, but I did hear some interesting thoughts from a few executives who run, or have run, important technology firms, including Steve Ballmer, Steve Case, and Andy Rubin.

Rubin, founder and CEO of Playground Global and best known as the former leader of the team that created Android, discussed the new Essential Phone, which had just been announced earlier that day. He noted that Playground is both a venture fund and a design studio, and backed Essential.

The Essential Phone will run stock Android and offer a 5.7-inch screen, yet remain the size of a normal phone because it has less bezel (though that isn’t all that unusual these days—the Samsung Galaxy S8 and LG G6 are other examples). What makes Essential more unusual is the titanium frame, the ceramic back, and the accessory bus, which uses two pins on the back where you can attach accessories such as a 360-degree camera.

Because the accessories connect with the phone wirelessly—the clips on the back of the phone are for alignment and power only—Rubin said this lets the company “future-proof future phones.” This, he said, will allow for continuous innovation, outside of the 24-month innovation cycle typical for today’s smartphones.

Rubin also talked about a future Home product, which includes a speaker, a 5.6-inch round screen, and the firm’s own personal-assistant software. For this, the goal is to build bridges that enable the device to control all of the items in your home, rather than tying you to a single ecosystem. Rubin talked about having the device support SmartThings, HomeKit, Thread, Weave, Android Wear, or any of the other various home IoT ecosystems, and do so securely and privately, though the company has not yet demonstrated the assistant software that would do this.

LA Clippers basketball team owner Steve Ballmer, best known as the former CEO of Microsoft, spent most of his interview talking about the recently launched USA Facts website, which Ballmer funded. His goal is to produce a site with accurate numbers about government revenues and spending in order to facilitate better discussions, and to produce the equivalent of a 10-K financial report for the government. He was surprised by the amount of interest in the current site, which has a very small staff.

Ballmer noted that it takes “quite a bit of machinery to keep it up to date” and he would like to see the site expand to include things such as state governments. He noted that many areas, such as education, are funded at the federal, state, and local levels, which makes it difficult to gather comprehensive information. Ballmer thinks it is important that the numbers tell a story, but he wants the site to focus on actual current spending numbers and historical numbers, and avoid predictions.

Ballmer didn’t talk much about Twitter, where he is a major investor, other than to say he thinks there’s a real opportunity to make Twitter a relevant economic asset, and that he’s proud to be associated with it. That investment goes back to his “investor phase,” which he said is over, as he now chooses to focus on his Microsoft shares and the Clippers.

On Microsoft, Ballmer said he was “too slow to recognize [the] need for new capabilities, particularly in hardware.” The company saw that the new “expression of software” was in hardware, but still needed to change its business model and delivery model. With Windows Phone, Ballmer said, Microsoft tried to use the same techniques it had used on Windows, even though “the same techniques were never going to get us there.” On the other hand, Microsoft did cloud right, Ballmer said, and added that he believes current CEO Satya Nadella is doing a good job.

I was interested in Ballmer’s view on technology and sports, and he talked about how having multiple cameras in a basketball game can help people understand the different kinds of pick-and-rolls and defenses, leading to things such as diagramming plays in real time, as well as showing the impact on fantasy leagues and synthesizing the view from a player’s perspective. Some of this will be available from a firm called Second Spectrum, which Ballmer said should be “in beta” for the 2018 season, and he hopes to have it live the following season.

Steve Case, CEO of Revolution and probably best known as the co-founder and former CEO of AOL, spent much of his time talking about the importance of encouraging innovation in less traditional parts of the country, a theme discussed in his book The Third Wave.

Case calls this theme the “Rise of the Rest” and noted that 75 percent of venture capital funding goes to companies in three states—California, New York, and Massachusetts. “People feel left out and left behind because people have been left out and left behind,” Case said, noting that the disparity in investments is one reason people in other parts of the country feel disadvantaged economically.

But Case said investing in other cities and states is not only the right thing for the country, since it would create a more level playing field, but also makes sense for investors. He argues that the “third wave” of the Internet requires more expertise in particular industries, which are more distributed around the country.

Case counts the early online companies focused on making communications happen, such as AOL, as the first wave, and said these companies were located all over the country: AOL was in Virginia, modem maker Hayes was in Atlanta, and Dell was in Austin. The second wave, he said, consisted of companies that built services on top of the Internet, such as search and social media (Google and Facebook, along with apps such as Snapchat and Instagram), and these were mostly based in Silicon Valley.

Case says we are now entering the third wave, in which companies will transform industries such as education, health care, and agriculture, and these companies could be anywhere, as there are more large companies and more business in the middle of the country than in the three states with large VC investments. He said he is in the middle of a tour—also called “Rise of the Rest”—meant to create more of a network effect by bringing media and investor attention to other areas of the country.

In addition, Case said, valuations tend to be larger in Silicon Valley, so there is an advantage for investors in other markets. He noted some recent successes, such as pet supply site Chewy, which is based in Ft. Lauderdale and was recently sold for $3.3 billion; and ExactTarget, which is based in Indianapolis and was acquired by Salesforce. ExactTarget has become Salesforce’s “Marketing Cloud,” while significantly growing in Indiana.

Case said that over the last three decades, all of the net growth in jobs has come from startups, and he talked about how there is now a global battle for talent. While America has led the way, he said we are now seeing a “globalization of entrepreneurship” which could be a threat to America’s leadership. In addition, he noted that 90 percent of venture capital went to companies headed by men, and only 1 percent went to African-Americans, and that this needs to change. Case sounded optimistic and pointed to a number of recent startups and technology movements in places such as Baltimore, Detroit, Arizona, and Indianapolis. Overall, he said, we need to “celebrate Silicon Valley, but also spread the love to other places.”

Also at the show, I heard technology discussions from Marc Andreessen and Reid Hoffman, and from Intel CEO Brian Krzanich.


Michael J. Miller is chief information officer at Ziff Brothers Investments, a private investment firm. Miller, who was editor-in-chief of PC Magazine from 1991 to 2005, authors this blog for PCMag.com to share his thoughts on PC-related products. No investment advice is offered in this blog. All duties are disclaimed. Miller works separately for a private investment firm which may at any time invest in companies whose products are discussed in this blog, and no disclosure of securities transactions will be made.

http://www.pcmag.com/article/354259/ballmer-case-and-rubin-talk-technology-at-code

Krzanich Says Intel Is Not a CPU Company Anymore

“We don’t think of ourselves as a CPU company anymore,” Intel CEO Brian Krzanich said yesterday during his interview at Code 2017. “We think of ourselves as a data company,” he said, making the products that will collect, analyze, store, and transmit all of the data generated by the many devices in the world. He said devices have to be connected to the cloud to add value, because analytics on large amounts of data is what matters most.

During the interview, Krzanich touched on a variety of areas—from PCs and drones to the cloud and AI processors.

He said there have been a lot of innovations in the PC market over the past several years, pointing especially to improvements in ease of use and battery life. “You’re going to see some real innovation in form factors, size, usability, multiple screens, coming in the next few years,” he said, later explaining that by multiple screens, he meant a device that could work as a physical notebook replacement, with a keyboard when you wanted one and the ability to switch between monochrome and color. He said PC sales are nearing stability, but that Intel has been able to increase its profitability because people are “buying up” to chips like the Core i7 and the recently introduced Core i9. Still, he said, 60 to 70 percent of Intel’s profits will come from growth areas outside the PC.

Co-host Walt Mossberg asked him about the rumors that Apple is considering using its own chips (such as the ones used in iPhones and iPads) rather than those from Intel in the Mac. Krzanich said Apple is always looking for the best performance, and “I actually believe that somewhere inside that company somebody is trying to see if they use their ARM-based cores to scale up into that space. As an engineer, I think they’d be foolish not to do that test and see if they can.” But he said Intel’s job is to make its products so compelling in terms of performance, battery life, macOS feature integration, and cost that Apple will continue to choose Intel. “We always look at it as a competitive market we have to win.”

In the cloud and the data center, Krzanich said, Intel thinks not about the processor itself, but about the entire server rack. He noted that Intel has more than 90 percent of market share of the computing inside data centers, whether in-house or in the public cloud or private cloud.

During the question period, I asked him about Intel’s reaction to recent AI chips, such as Nvidia’s Volta GPU and Google’s Tensor Processing Unit. “We really want to provide people with processors that can go across multiple workloads,” he said. GPUs and TPUs are good for certain workloads, but he said Intel has Atom chips that can handle workloads in an autonomous car, Xeon for general servers, Xeon Phi to compete with GPUs, FPGAs for video analytics, and AI-specific ASICs from Nervana, a company Intel recently acquired.

Every 10 or 15 years, Krzanich said, new workloads come into the computing market, and AI is such a change. The first thing that happens is people build ASIC accelerators, and then use FPGAs (field-programmable gate arrays). “We’ve seen this cycle before,” he said. Intel wants to participate in the market through FPGAs, which can be easily reprogrammed, and through the chips provided by Nervana, which it thinks can compete with or beat GPUs, TPUs, and other application-specific accelerators. All these products will start to be branded Nervana, he said, but with a variety of different features, costs, and energy levels, all the way down to Movidius, another company Intel recently purchased, which makes chips for drones.

On other topics, he said Intel is interested in commercial drones, not consumer, and is particularly focused on how data is ingested and how you can apply AI to things like inspecting power lines and cell towers. He talked about the company’s new partnership with Major League Baseball and with other sports leagues to bring VR to sports, such as showing what the Super Bowl field looked like to Tom Brady. He said this involved 50 high-definition cameras at box level in the stadium that sent information back to a massive server, which converted the data to voxels and created a complete visual model of everything that could be seen from any angle. This generates 2 terabytes of data a minute; the system has also been used in the NBA and NCAA basketball finals.

He seemed particularly bullish about autonomous driving, saying the average car today has about 80 small microprocessors designed for specific things, but that “the car of the future will be more like a server.”

This led into a discussion about Mossberg’s hope for new privacy laws that set some rules about what happens with all the data that can now be collected. Krzanich said he tended to agree that some new regulations would be needed, not so much for devices you put in your home (because you’ve chosen to do that, and presumably agreed to the terms of service), but particularly for data collected by things like cars. These will know where you drive and how fast, but more importantly, in order to drive successfully, they have to look at everything and everyone, and see people on the street, license plates of other cars, and so on. In those cases, privacy laws must be re-examined, he said.


http://www.pcmag.com/article/354082/krzanich-says-intel-is-not-a-cpu-company-anymore

Clinton: ‘Weaponizing the Tech Revolution’ Cost Me the Election

At Code 2017, former Secretary of State and Democratic presidential candidate Hillary Clinton said last year’s presidential election was “the first time you really had the tech revolution weaponized politically.”

She pointed to Russian agents, fake news sites, content farms, and bots distributing false information as major contributors to her loss, saying “The other side was using content that was flat-out false and delivering it in a personalized way.” On Facebook, she said, the vast majority of news items posted were faked, and these were connected to 1,000 Russian agents and bots.

While she said she was very proud of her campaign’s data and analytics team, its focus was on better targeting and better messaging, aiming to turn out her voters and communicate with them. While her campaign, like all political organizations, tried to put information in the best possible context, she said, “we did not engage in false content.”

Also contributing to her defeat, she said, was the Citizens United court decision allowing unaccountable money into the campaign, and the “effective suppression of votes” following the suspension of the Voting Rights Act.

Clinton admitted that using a personal e-mail server was a mistake, but said her use of it was responsible and not careless. Still, she said, it was “exploited very effectively for adverse political reasons,” and pointed to the reopening of the investigation into her email right before the election as the pivotal moment for her campaign.

Much of her discussion dealt with false information she said was spread by the Russians, pointing to a declassified report from 17 intelligence agencies in early January that concluded with “high confidence” that Russians ran an extensive information-war campaign against her through paid advertising, false news sites, and agents. She said, “The forces we are up against are not just interested in influencing our elections and our politics; they are going after our economy, and our unity as a nation.”

Clinton noted that when she became the nominee, the data that the Democratic National Committee had on voters was “mediocre to poor.” Meanwhile, the Republican National Committee had raised close to $100 million between 2012 and 2016 and used that to build a foundation of data. When that was combined with psychographic data from Cambridge Analytica, marrying content with delivery and data, she said, you had a “potent combination.”

She also said that the Russians could not have known how to best weaponize that information unless they had been guided by some Americans. For instance, within an hour of the leaked tapes from Access Hollywood with Donald Trump talking about how he treats women, WikiLeaks leaked emails from her campaign manager and began to weaponize them. She said that after Comey’s letter about her emails came out in late October, the biggest Google searches were for WikiLeaks, and this was particularly high in Wisconsin and Pennsylvania, states she lost.

She said that we are getting more information about the contacts between Trump campaign officials and associates with the Russians before, during, and after the election, and said, “We’re going to, I hope, connect a lot of the dots.”

Asked by conference co-host Kara Swisher if she blamed Facebook and other platforms, Clinton said she wasn’t exactly sure, but that “what was happening to me was unprecedented.” She suggested that Facebook and the other platforms need to curate content more effectively and stop fake news from creating a new reality.

Later, responding to an audience question about the impact of Twitter and other social media, Clinton said she has lots of sympathy for people trying to make the decisions to contain the weaponization of information. She would rather see the industry erring on the side of blocking information, rather than having the public overwhelmed by the fake information. (It wasn’t completely clear, but to me, that sounds a bit like endorsing some amount of censorship.)

Other topics she touched on were the continuing investigations, the Democrats’ chances of winning the House of Representatives in 2018, how women are perceived in politics, and the book she is writing.

http://www.pcmag.com/article/354043/clinton-says-weaponizing-the-tech-revolution-cost-her-the

Google Apps, Tools Aim to ‘Democratize AI’

To me, the biggest theme at last week’s Google I/O conference was “democratizing AI”—in other words, making AI accessible both to end-users through its use in a variety of Google services, and to developers through new tools, programs, and even hardware designed around Google’s TensorFlow AI framework.

Google CEO Sundar Pichai opened the conference with a keynote in which he again stressed that the company is moving from a mobile-first to an AI-first approach, echoing what he said last year.

He said Google was “rethinking all our products and applying machine learning and AI to solve users’ problems.” He noted that machine learning algorithms already influence the ranking of search results, and that Street View now automatically recognizes signs. Other services are getting smarter because of AI as well, he said: Google Home now supports multiple users, and Gmail is rolling out a “smart reply” feature that automatically suggests responses to emails.

To that end, he made a number of announcements of AI products, both for consumers and for developers.

Lens, Assistant, and Photos Use AI Features

For end-users, the most visible of these new efforts is Google Lens, a set of vision-based computing capabilities that can understand what you are seeing and take action, both in the Google Assistant and in Google Photos.

For instance, he demonstrated how you can take a picture of a flower, and how Google Lens can now identify it. More prosaically, it can take a picture of a username and password for Wi-Fi, and then automatically understand that you want to connect and do that for you. Other examples include taking a picture of the outside of a restaurant and having the software understand what it is, then showing you user reviews and menus. This isn’t all completely new, but I can imagine that it will be quite useful—the kind of thing we’ll all be using pretty much by rote in a few years. Google says this will be rolling out in a few months.

Google Assistant continues to get smarter and will incorporate Google Lens, though the biggest news on that front is that Assistant is now coming to the iPhone.

The popular Google Photos app is also getting a number of other new AI-driven features, including “suggested sharing,” where it will automatically select the best pictures and suggest you share them with the people in the photos. Google Photos is also adding a feature that will automatically let you share all or part of your library, so that if you take photos of your kids, they automatically become part of your partner’s photo library as well. And it can suggest the best photos for a photo book.

AI-First Data Centers and New Development Tools

On the internal side, Pichai talked about how the company was “rethinking” its computational architecture to build “AI-first data centers.” He said Google uses its current Tensor Processing Units (TPUs) across all its services, from basic search to speech recognition to its AlphaGo competition.

I was particularly intrigued by the company’s introduction of TPU 2.0, a new version of its Tensor Processing Unit, which Pichai said is capable of reaching 180 teraflops (180 trillion floating-point operations per second) per 4-chip board, or 11.5 petaflops in each “pod” of 64 such boards. These are available to developers as “cloud TPUs” on the Google Compute Engine now, and the company said it would make 1,000 cloud TPUs available to machine learning researchers via its new TensorFlow Research Cloud.
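Those two figures are consistent with each other; a quick back-of-the-envelope check (both numbers are Google’s):

```python
# Sanity-check Google's quoted TPU 2.0 figures.
board_tflops = 180        # teraflops per 4-chip cloud TPU board (Google's figure)
boards_per_pod = 64       # boards per "TPU pod" (Google's figure)

pod_tflops = board_tflops * boards_per_pod
print(pod_tflops / 1000)  # petaflops per pod -> 11.52
```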

This is part of an increasing push on TensorFlow, the company’s open source machine learning framework for developers, and the conference had a variety of sessions aimed at getting more developers to use this framework. TensorFlow appears to be the most popular of the machine learning frameworks, but it’s only one of a number of choices. (Others include Caffe, which is pushed by Facebook, and MXNet, pushed by Amazon Web Services.)

I went to a session on “TensorFlow for Non-Experts” designed to evangelize the framework and the Keras deep learning library, and it was packed. It’s fascinating stuff, but not as familiar as the more traditional development tools. All the big companies say they are having trouble finding enough developers with machine learning expertise, so it’s no surprise to see all of them pushing their internal frameworks. While the tools to use these are getting better, it’s still complicated. Of course, just calling an existing model is much easier, and Google Cloud Platform, as well as Microsoft and AWS, all have a variety of such ML services developers can use.
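None of the session’s code is reproduced here, but as a rough illustration of the kind of work these frameworks automate, here is a toy single-neuron training loop in plain Python with hypothetical data; libraries like TensorFlow and Keras handle the gradient computation, batching, and hardware acceleration behind a much higher-level API:

```python
# Toy illustration of what ML frameworks automate: fitting y = 2x + 1
# with a single linear "neuron" by gradient descent on mean squared error.
# (Hypothetical data; not code from the Google I/O session.)
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b, lr = 0.0, 0.0, 0.01

for _ in range(1000):
    # Gradients of mean squared error with respect to weight and bias.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```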

Because developing such services is so hard, Pichai spent a lot of time talking about “AutoML,” an approach in which neural nets design new neural networks. He said Google hopes that AutoML will take an ability that only a few PhDs have today and, in three to five years, make it possible for hundreds of thousands of developers to design new neural nets for their particular needs.

This is part of a larger effort called Google.ai to bring AI to more people, and Pichai talked about a variety of initiatives using AI to help in health care, including pathology and cancer detection, DNA sequencing, and molecule discovery.

Continuing the theme, Dave Burke, head of Android engineering, announced a new version of TensorFlow optimized for mobile, called TensorFlow Lite. The new library will allow developers to build leaner deep learning models designed to run on Android smartphones, and he talked about how mobile processor designers are working on specific accelerators in their processors or DSPs designed for neural network inferencing and even training.

Fei-Fei Li at Google I/O 2017

In the developer keynote, Fei-Fei Li, the Stanford professor who heads Google’s AI research, said she joined Google “to ensure that everyone can leverage AI to stay competitive and solve the problems that matter most to them.”

She talked a lot about “democratizing AI,” pointing to the various tools Google makes available to developers for specific applications, such as vision, speech, translation, natural language, and video intelligence, as well as tools for creating your own models, such as TensorFlow, which is becoming easier to use thanks to more high-level APIs.

She noted that developers can now use CPUs, GPUs, or TPUs on the Google Compute Engine, and gave an example of how much faster some models run on TPUs, saying the research implications of this are significant.

Echoing Pichai, she touted the new TensorFlow Research Cloud, saying students and Kaggle users should apply to use it. She concluded by saying the firm created its Cloud AI team to make AI democratic, to meet customers where they are with Google’s most powerful AI tools, and to share the journey as they put those tools to use.

http://www.pcmag.com/article/353992/google-apps-tools-aim-to-democratize-ai


Andreessen and Hoffman at Code: ‘We Don’t Have Enough Change’

Investors Marc Andreessen and Reid Hoffman offered differing views on productivity, investments, “fake news,” and the role of social media in a wide-ranging discussion on the first day of this year’s Code Conference. Andreessen challenged the conventional wisdom that technology is destroying jobs, while Hoffman was more worried about the transition to new jobs. Hoffman also focused on systems that help people discern what is real and what is fake on social media, an issue Andreessen mostly dismissed.

Both men are very successful technology investors with big roles in social media. Andreessen co-founded Netscape, runs Andreessen Horowitz, serves on the board of Facebook, and is a prominent investor in Lyft. Hoffman co-founded LinkedIn, is a partner at Greylock Partners, and recently joined the board of Microsoft.

Andreessen said we now have two different kinds of economies. In fast-changing sectors like retail, transportation, and media, he said, we are seeing a huge role for software (echoing his comment a few years ago that “software is eating the world”), massive productivity improvements, and a gigantic change in jobs. These sectors are marked by rapidly falling prices, and it is here that the concern about job loss is most real.

But he said there is also a “slow change” part of the economy, including healthcare, education, construction, elder care, child care, and government. Here he said, “we have a price crisis,” noting that almost all of the price increases we have seen in the past few years have been in education, health care, and construction. These are areas where technology is having almost no impact, and where we’re seeing almost no productivity growth. Left unchecked, those areas are “eating the economy.”

He said he divides his investments into those two buckets, and that the opportunity and the challenge is “to figure out how to have a much bigger impact on the slow change parts of the economy,” with active investments in education and health care. He noted these areas are highly regulated, and so not easy to disrupt, but said the opportunity exists to drive down prices.

Hoffman viewed the world a bit differently, splitting his investments into two areas. The first is businesses with network effects, such as Airbnb and Convoy (which he described as “Uber for trucking”). The second is areas that are contrarian, in that they focus on technologies that are not in the buzz cycle—not things like AI or virtual reality. These include construction robotics and energy sources, hinting that one of the companies was working on fusion energy.

Andreessen noted that in “so-called AI,” including machine learning and sensors, “something dramatic really tipped about five years ago.” He said this follows the classic model of Silicon Valley: “of course, we’re going to overinvest” in those areas. Most companies in these areas won’t work, he said, but the ones that do will become very successful.

Conference co-host and moderator Kara Swisher asked the two of them if they worried about the job impact of these technologies, and that led to an interesting discussion about jobs and productivity.

Hoffman said he views platforms like LinkedIn as a way to help people get the right skills and the right connections, and said that things like autonomous vehicles would let people get to work more easily and be more productive.

Andreessen called the idea that technology will replace jobs the “Luddite fallacy,” one that resurfaces every 25 to 50 years. He noted that the same issue came up when the automobile was invented and jobs were lost for blacksmiths and others who took care of horses. But the car created many more jobs: not only in building cars, but in “second-order” effects such as paved streets, restaurants, hotels, motels, movie theaters, apartment complexes, office complexes, and suburbs. He said the self-driving car can improve productivity for people in the car and save lives, and that it will have other impacts, possibly including a huge construction boom in areas outlying big, crowded cities.

He noted that unemployment numbers are very low, and claimed that we have six million job openings and that in many places, “we don’t have enough workers.”

Hoffman responded that many people will need different kinds of jobs, and said that “transitions can be very painful.” In general, he said, we should “try to make it work out in a way that’s more humane.”

I was pleased to see Andreessen point out that, counter to most of the prevailing beliefs in the technology industry, productivity growth is at a generational low, not a high; that the rate of job creation and destruction has been declining for 40 years; that people are actually staying in jobs longer, not shorter, than they used to; and that we’re seeing fewer new companies in existing industries. “We have the opposite problem. We don’t have enough change,” he said.

During the question period, I asked whether the massive amount of time people spend on social media is impacting productivity at work. Hoffman said he didn’t see this as an issue, though Swisher seemed incredulous at his answer. Andreessen referred to a recent article by Noah Smith of Bloomberg on the topic, saying that might explain the generational decline in productivity. He didn’t really give an opinion, but joked that if social media was slowing down productivity, maybe that was good, because it was also slowing down job churn.

Andreessen and Hoffman were preceded on stage by Tristan Harris, a former design ethicist at Google, who gave a short but impassioned speech on how social media and internet technologies are “steering the thoughts” and beliefs of 2 billion people. Harris complained that the “attention economy” is reshaping conversation and changing both belief and behavior, saying that Facebook can inadvertently prefer an outraged news feed to a calm one because more people will click on it. New technology, such as Lyrebird’s audio-matching algorithms that can copy a voice, will undermine our ability to understand what is fake. “Our mind is being hijacked,” he said.

Harris compared the “runaway AI” we already have to the invention of the nuclear bomb, and said we need to make fundamental changes, such as accountability mechanisms other than advertising, to stabilize the world.

Not surprisingly, both Andreessen and Hoffman disagreed strongly, with Andreessen saying Harris’s thoughts reflect the “reality privilege” that elites have, and that most people don’t have better experiences away from the internet. Hoffman said we can correct the commercial system’s biases.

The two disagreed on the role of social media and “fake news.” Hoffman said he is focusing on things like discerning what the facts are, and that he is doing a lot of thinking about building systems that would engender more trust. He said we had presumed that most people could discern the truth, but now we need to think about how to help people find better guideposts to it.

Andreessen said that “truth” has become shorthand for things that people on the coasts believe, noting that if you read the mainstream press, you would have thought that Hillary Clinton would be elected over Donald Trump, but that “If you wanted the truth, you should have read Breitbart.” He said, “we all need to take a step back on the idea that we have absolute truth.”

Hoffman said he and Zynga co-founder Mark Pincus created Win the Future, a left-leaning political group meant to promote social responsibility while remaining pro-business, and noted that he is concerned with how Silicon Valley “problem solvers” can be enlisted to solve the problems people are facing, including fake news. But he agreed that charges of fake news can be leveled both ways, saying the result is an “erosion of institutions” and that the two sides need to be able to talk to one another. “Without that, we don’t have democracy,” he said. He echoed a thought from Microsoft President Brad Smith about how we get to a “Geneva Convention in Cyber.”

Both adamantly said they were not running for office.

http://www.pcmag.com/article/354005/marc-andreesen-and-reid-hoffman-at-code-we-dont-have-enou


Google Cloud TPUs Part of a Trend Towards AI-Specific Processors

In the last few weeks, there have been a number of important introductions of new computing platforms designed specifically for working on deep neural networks for machine learning, including Google’s new “cloud TPUs” and Nvidia’s new Volta design.

To me, this is the most interesting trend in computer architecture—even more than AMD and now Intel introducing 16-core and 18-core CPUs. Of course, there are other alternative approaches, but Nvidia and Google are deservedly getting a lot of attention for their unique approaches.

At Google I/O, the company introduced what it calls a “cloud TPU” (for Tensor Processing Unit, indicating that it is optimized for Google’s TensorFlow machine learning framework). The previous-generation TPU, introduced at last year’s show, is an ASIC designed primarily for inferencing (running trained machine learning models); the new version is designed for both inferencing and training.

In a recent paper, Google gave more details on the original TPU, which it described as containing a 256-by-256 matrix of multiply-accumulate (MAC) units (65,536 in total) with a peak performance of 92 teraops (trillion operations per second). It gets its instructions from a host CPU over a PCIe Gen 3 bus. Google said this was a 28nm die less than half the size of an Intel 22nm Haswell Xeon processor, and that it outperformed both that processor and Nvidia’s 28nm K80.

The new version, dubbed TPU 2.0 or cloud TPU (seen above), actually contains four processors per board, and Google said each board is capable of reaching 180 teraflops (180 trillion floating-point operations per second). Just as importantly, the boards are designed to work together over a custom high-speed network, so they act as a single machine learning supercomputer that Google calls a “TPU pod.”

This TPU pod contains 64 second-generation TPU boards and provides up to 11.5 petaflops to accelerate the training of a single large machine learning model. At the conference, Fei-Fei Li, who heads Google’s AI research, said that while one of the company’s large-scale learning models for translation takes a full day to train on 32 of the best commercially available GPUs, it can now be trained to the same accuracy in an afternoon on one-eighth of a TPU pod. That’s a big jump.
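For scale, one-eighth of a pod is a sizable machine on its own (the per-board and pod figures are Google’s; the throughput of the GPU baseline wasn’t specified in the talk):

```python
# One-eighth of a 64-board TPU pod, using Google's quoted per-board figure.
boards = 64 // 8       # 8 boards
tflops = boards * 180  # aggregate teraflops
print(boards, tflops)  # 8 boards, 1440 teraflops (~1.4 petaflops)
```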

TPU Pod

Understand that these are not small systems—a Pod looks to be about the size of four normal computing racks.

TPU board

And each of the individual processors seems to have a very large heat sink, meaning the boards can’t be stacked too tightly. Google hasn’t yet given much detail on what has changed in this version of the processor or the interconnect, but it’s likely that this design, too, is built around large arrays of MAC units.

Nvidia Tesla V100

The week before, Nvidia introduced its latest entry in this category: the Tesla V100, a massive chip that is the first based on the company’s new Volta architecture for high-end GPUs.

Nvidia Tesla V100

Nvidia said the new chip is capable of 120 teraflops of tensor (deep learning) operations, or 15 teraflops at 32-bit precision and 7.5 at 64-bit. It uses a new architecture that includes 80 Streaming Multiprocessors (SMs), each of which contains eight new “Tensor Cores”; each Tensor Core is a 4x4x4 array capable of performing 64 FMA (fused multiply-add) operations per clock. Nvidia said it will offer the chip in its DGX-1V system with eight V100 boards in the third quarter, following the firm’s earlier DGX-1, which used the earlier P100 architecture.
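Those numbers line up: each FMA counts as two floating-point operations, and multiplying the per-clock tensor throughput by a boost clock of roughly 1.455 GHz (my assumption here, not a figure from the announcement) lands near the 120-teraflop claim:

```python
# Rough reconstruction of the V100's tensor-throughput claim.
sms = 80                     # streaming multiprocessors
tensor_cores_per_sm = 8
fma_per_core_per_clock = 64  # each Tensor Core is a 4x4x4 matrix FMA unit
ops_per_fma = 2              # one multiply plus one add

ops_per_clock = sms * tensor_cores_per_sm * fma_per_core_per_clock * ops_per_fma
boost_clock_ghz = 1.455      # approximate boost clock (assumption)
print(ops_per_clock)                                      # 81920 ops/clock
print(round(ops_per_clock * boost_clock_ghz / 1000, 1))   # ~119 teraflops
```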

The company said this $149,000 box should deliver 960 teraflops of training performance while using 3,200 watts. Later, the firm said, it will ship a Personal DGX Station with four V100s, and in the fourth quarter the big server vendors will ship V100 servers.

This is the first chip announced to use TSMC’s 12nm process, and it will be huge: 21.1 billion transistors on an 815-square-millimeter die. Nvidia cited both Microsoft and Amazon as early customers for the chip.

Note there are big differences between these approaches. The Google TPUs are really custom chips, designed for TensorFlow applications, while the Nvidia V100 is a somewhat more general chip, capable of different kinds of math for other applications.

Meanwhile, the other big cloud providers are looking at alternatives. Microsoft uses GPUs for training and field-programmable gate arrays (FPGAs) for inferencing, and offers both to customers. Amazon Web Services now makes both GPU and FPGA instances available to developers. Intel has been pushing FPGAs and a host of other techniques, and a number of startups are working on alternative approaches.

In some ways, this is the most drastic change we’ve seen in workstation and server processors in years, at least since developers first started using “GPU compute” several years ago. It will be fascinating to see how this develops.

http://www.pcmag.com/article/353984/google-cloud-tpus-part-of-a-trend-towards-ai-specific-proces


AMD Introduces Epyc Server Chip, 16-Core Desktop Chip

At this week’s financial analyst meeting, AMD unveiled a 16-core, 32-thread desktop processor called Ryzen Threadripper, announced its new Epyc brand for server chips, and introduced its first graphics board aimed at the machine learning market.

But I was also glad to see the company unveil a roadmap of successive generations in its CPU, GPU, and server lines, with migration to 7nm and 7nm+ process nodes through 2020, which is crucial for the firm to regain credibility with business buyers. At this point, competitors Intel and Nvidia dominate their markets, particularly on the server side, and business buyers need to be convinced that AMD will be a long-term player in order for it to land on the consideration list.

“Immersive and instinctive computing will transform all of our daily lives,” AMD CEO Lisa Su said at the conference when defining the company’s vision for the future. Immersive computing with high-end graphics is all around us, but instinctive computing—which involves using huge amounts of data and machine learning algorithms—is just beginning to evolve. All of this requires “high-performance computing,” she said, a term she used to describe all sorts of high-end computing and graphics, not just the HPC or supercomputing market.

Su talked at length about investing for “multi-generational leadership” in x86 CPUs, in graphics (both discrete and integrated), and in software, a big change for a company whose primary products have targeted the mainstream or low-end markets. AMD will now focus on premium products, she said, noting that while the mainstream accounts for most of its units, the premium part of the market accounts for most of its revenue and profits.

Her biggest announcement was probably Epyc, the branding for the new line of server chips that had been codenamed Naples. Su said that the Zen architecture, which debuted in the Ryzen desktop chips, was “created with the new data center in mind.” And while she is enthusiastic about what Zen can bring to the desktop and laptop markets, she is even more excited about what it can do in the data center. She talked about how AMD is also targeting the data center through Radeon Instinct, a version of its next-generation graphics architecture known as Vega, which will provide 25 teraflops of performance, and how these products would work together in a vision of heterogeneous computing.

“Today’s data center really requires heterogeneous computing to be successful,” she said, describing AMD as the only provider of both high-performance computing and graphics.

AMD's Mark Papermaster

CTO Mark Papermaster gave more detail about the glue that will hold the new chips together, as well as the firm’s process for designing new chips in a way that will “provide sustained innovation going forward.” Papermaster described the major features of the Zen and Vega architectures for CPUs and GPUs, most of which had been described previously, and said the firm has to design “not only for performance, but also for efficiency.”

AMD Infinity Fabric

Tying the chips together is the firm’s new Infinity Fabric, which connects CPUs, GPUs, memory controllers, and other features within a chip and between chip sockets. Papermaster called Infinity Fabric a “hidden gem,” and explained how it includes a control fabric that manages sensors on the chip; can regulate performance and security; and can also work as a data fabric, moving data between the various parts of the system.

He said this allows for “near perfect scalability” through 64 CPU cores. AMD is also supporting new industry-standard interconnects between systems, known as Gen-Z and CCIX, as it pushes for open standards (since it doesn’t have Intel’s heft in the market).

Papermaster said that one big challenge for the firm over the next few years will be “defying the slowing of Moore’s Law.” He said that through integration, software, and system design it would be able to stay at the pace of generational performance improvement, even without frequency improvements.

AMD x86 Roadmap

AMD graphics roadmap

To that end, Papermaster showed roadmaps for both the graphics and CPU lines through 2020, and said that the team is not only rolling out the current generation of products but already working on the next two. The plans show moves to 7nm and 7nm+ process nodes, with continuing improvements in both raw performance and performance per watt. While these weren’t detailed, they were great to see.

A 16-core, 32-Thread CPU and a GPU Aimed at Machine Learning

AMD Ryzen Rollout

Jim Anderson, General Manager of the Computing and Graphics group, said that in addition to the Ryzen 7 and Ryzen 5 products already announced, a lower-end Ryzen 3 will ship in the third quarter. More importantly, all five of the top PC OEMs (Acer, Asus, Dell, HP, and Lenovo) will have Ryzen desktops available for consumers in the market by the end of the quarter. Commercial desktops should follow in the second half of this year, he said.

For those interested in the “absolute highest performance,” Anderson announced a new version called Ryzen Threadripper with 16 cores and 32 threads coming “this summer.”

Anderson said that a mobile version of the Ryzen processor, with on-die Vega graphics, will be available for consumer systems in the second half of 2017. A commercial version should follow in the first half of 2018. He said the mobile chip will offer 50 percent more CPU performance and 40 percent more GPU performance while using 50 percent less power compared to the company’s current seventh-generation APU.

Raja Koduri, Chief Architect for the AMD Radeon Technologies Group

Meanwhile, in the graphics world, Raja Koduri, Chief Architect for the Radeon Technologies Group, discussed the company’s plans for a new series of graphics boards based on the new Vega architecture. Koduri noted that the company’s current Polaris architecture GPUs mostly address the mainstream and mid-market segments of the graphics market (with graphics boards under $300), and acknowledged that the company didn’t play at the top end.

This will change with the new Vega architecture. Among the features Koduri described were a new high-bandwidth cache controller (which can double or quadruple the available memory), a new programmable geometry pipeline, Rapid Packed Math (for 16-bit floating point), and an advanced pixel engine. He showed demos featuring much smoother motion in high-end gaming and said Vega will support 4K 60 Hz gaming.

For the professional market, Koduri talked about getting more certifications and supporting SSG, which will allow for a 16GB high-bandwidth cache and up to 2TB of on-board NVMe flash memory. He showed demonstrations of this working in real-time ray tracing and in producing 8K video clips in Adobe Premiere.

AMD ROCm Software Stack

Koduri then turned his attention to machine learning, where competitor Nvidia has made huge inroads in deep learning with its GPU-based products and its CUDA architecture. He acknowledged that “AMD is not even in the conversation today,” but emphasized the company’s heterogeneous computing approach and what it calls the Radeon Open Compute Platform (ROCm), which supports running machine learning applications on all of the leading frameworks, including TensorFlow and Caffe.

Koduri demonstrated the new product, which will be called Radeon Vega Frontier Edition, and showed that it is very competitive on Baidu’s DeepBench machine learning training benchmark. He said this will offer 13 teraflops of performance at 32-bit, 25 teraflops at 16-bit, as well as up to 16GB of high-bandwidth memory (HBM2), which is four times that of AMD’s current Fury X board. This is due out in late June. One would assume other Vega-based boards will follow shortly.

Koduri said that he believes Threadripper and Naples will be disruptive, in part because the CPU bottlenecks will go away, leaving more room for GPU performance.

AMD Promises ‘A New Day for the Datacenter’

Forrest Norrod, SVP and General Manager for the Enterprise, Embedded and Semi-Custom Business Group, gave details on the Epyc server chip. He said “datacenter leadership” is the company’s No. 1 priority, despite having what he described as a market share “rounding to zero.”

Norrod said the company’s previous achievements in server technology—which include having created the first 64-bit x86 cores, high-speed coherent interconnects, and integrated memory controllers—are what make it “plausible… predictable… inevitable” for it to again compete in this market.

Forrest Norrod, SVP and General Manager for the Enterprise, Embedded and Semi-Custom Business Group

Norrod said that Epyc will include 32 Zen cores, 8 memory channels, 128 lanes of high bandwidth I/O, and a dedicated security engine. But while that sounds like a huge chip, Norrod explained that the Infinity Fabric Papermaster described earlier enables the chip to actually be built from four 8-core die, which makes it much easier and less expensive to produce.

Norrod said that in a two-socket configuration, the chip could offer 64 cores, 4TB memory, and 128 PCI Express lanes, thus giving it 45 percent more cores, 122 percent more memory bandwidth, and 60 percent more I/O than the Xeon E5-2699A v4 (Broadwell). And he presented a couple of demos that showed it outperforming the Xeon in tasks such as compiling Linux.
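Norrod’s percentages check out against the Intel part’s published specs (22 cores, 40 PCIe lanes, and four DDR4-2400 memory channels per socket; the DDR4-2666 speed for Epyc is my assumption, not a figure from the talk):

```python
# Check the quoted advantages of 2-socket Epyc vs. 2x Xeon E5-2699A v4.
epyc_cores, xeon_cores = 32 * 2, 22 * 2
epyc_lanes, xeon_lanes = 128, 40 * 2
# Memory bandwidth scales with channels x data rate
# (DDR4-2666 for Epyc is an assumption; DDR4-2400 is Broadwell's spec).
epyc_bw, xeon_bw = 8 * 2 * 2666, 4 * 2 * 2400

print(round(100 * (epyc_cores / xeon_cores - 1)))  # ~45% more cores
print(round(100 * (epyc_lanes / xeon_lanes - 1)))  # 60% more I/O
print(round(100 * (epyc_bw / xeon_bw - 1)))        # ~122% more memory bandwidth
```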

I was more impressed with a demo of how a single-socket version of the chip could outperform a middle of the market dual-socket Intel configuration, which he said accounts for most of the market. (As always, I take vendor benchmarks with a grain of salt, and suggest you do as well.)

Epyc will offer the “best value for end users,” he said. AMD is looking for leadership in specific segments of the market; Norrod noted it’s likely to get traction first from the bigger datacenter customers who write their own software. One big difference between AMD’s and Intel’s approaches, he said, is that every Epyc will be “unrestrained,” with all of the I/O, memory-channel, high-speed memory, security-stack, and integrated-chipset features supported on all models. (Intel offers certain features only on higher-end models.)

Epyc Machine Learning

Norrod said Epyc offers a simpler architecture compared to Intel, and made a big deal about combining Epyc and the Radeon Instinct platform for machine learning. Norrod said Epyc is scheduled to be launched in late June, with more than 30 server models expected to be shipped this year.

AMD server roadmap

Norrod also stressed that this is the first in a series of chips, and presented a roadmap with versions called “Rome” and “Milan” (to follow “Naples”) between now and 2020, using 7nm and 7nm+ processes. He emphasized that it isn’t all about core performance, but rather continuous innovation.

Following a financial presentation, CEO Lisa Su returned to close the event, and told the financial analysts that while the numbers are important, “this company is all about the products.” It’s certainly good for the industry to have more competition in CPUs, graphics, and especially in the data center market. After all, the periods when we’ve seen the most competition have also been the periods when we’ve seen the most innovation.

http://www.pcmag.com/article/353745/amd-introduces-epyc-server-chip-16-core-desktop-chip


Microsoft Build Focuses on the “Intelligent Cloud” and “Intelligent Edge”

At its annual Build developer conference today, Microsoft made a push for moving toward a world with both an “Intelligent Cloud” and an “Intelligent Edge,” to take advantage of the abundance of data and computing power, as well as new AI algorithms. Not surprisingly, the company wants developers to use its tools, and seems to be particularly working to expand the possibilities of these tools for enterprise developers, while it goes after new markets in areas such as machine learning and massive cloud databases.

The biggest product news was the introduction of Cosmos DB, a globally distributed database service that allows developers to have a single system image of a database running all across the world. It works with multiple database models and enables features I hadn’t seen before, which look quite interesting for developers.

In addition, the company announced a number of new development tools, including Visual Studio for the Mac, new MySQL- and Postgres-based database services, and a bigger focus on serverless and container-based development. There was also a long session on AI tools, which included building custom machine learning services and the introduction of a real-time translation plug-in for PowerPoint.

Nadella On the Vision for Intelligent Cloud and Intelligent Edge

Microsoft CEO Satya Nadella started the main keynote by citing some statistics about how well Microsoft is doing in a “mobile first, cloud first” world.

Nadella said there are 500 million monthly active devices now running Windows 10, 100 million monthly active users of Office 365, 140 million monthly active users of Cortana, 12 million organizations using Azure Active Directory, and, of the Fortune 500 companies, over 90 percent are using the Microsoft Cloud. These are impressive numbers, and they show continual adoption of Windows 10 in both the consumer and enterprise space (but are dwarfed by the number of Android or iOS mobile devices), as well as the big adoption the company has seen for Office 365.

On Office 365, Nadella said it provides its own platform for extensions and add-ons, as well as for developers to use features such as single sign-on. Notably missing were any statistics about the success of the Azure platform for general infrastructure-as-a-service and platform-as-a-service, an area where Microsoft faces big competition from Amazon Web Services and Google Cloud Platform, among others.

To that end, much of the keynote was aimed at demonstrating that Microsoft remains current in its developer offerings compared to the other choices, with lots of focus on AI services, Azure functions, and serverless computing—the new directions that most enterprise developers aren’t using yet but which are beginning to become part of development roadmaps.

Nadella talked about how things like agents, bots, natural user interfaces, mixed reality, the Internet of Things (IoT), artificial intelligence, microservices, and advanced analytics and workflows are helping push Microsoft's worldview beyond "mobile-first, cloud-first" and toward "Intelligent Edge" and "Intelligent Cloud."

In this new world, Nadella said, there will be three defining characteristics. The user interface will span multiple devices, and include things such as a personal assistant that works across them. Artificial intelligence will by definition be more distributed, with training done in the cloud and inference on the edge, eventually leading to new ways of doing both in both places. To make this work, Nadella said there needs to be a big change in the "outer loop" of development, with microservices, containers, and serverless computation, so applications can react to changes in things like AI models. These trends will profoundly change what happens in Windows, Office 365, and Azure, he added.
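The cloud-training, edge-inference split Nadella described can be sketched in a few lines of Python. This is a deliberately tiny illustration, not Microsoft's stack: a model is fit where compute is plentiful, then only the learned parameters (not the training data) ship to the edge device, which runs cheap local inference with no cloud round-trip.

```python
import json

# "Cloud" side: fit a simple least-squares line y = a*x + b on telemetry data.
def train(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return {"a": a, "b": b}

# Only the trained parameters are shipped to the edge device.
model_blob = json.dumps(train([1, 2, 3, 4], [2.1, 4.0, 6.2, 7.9]))

# "Edge" side: inference is a cheap local computation.
def infer(blob, x):
    m = json.loads(blob)
    return m["a"] * x + m["b"]

print(infer(model_blob, 5))  # a prediction near 9.95, computed locally
```

The division of labor is the point: heavy, data-hungry training stays in the cloud, while the edge runs a small artifact that can respond in milliseconds even when connectivity is poor.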

Nadella also talked about developers' responsibility. While he is an optimist, he said, technology has unintended consequences, and he told the audience that it is up to us to ensure that the more dystopian scenarios don't come true, citing the works of both George Orwell and Aldous Huxley. He said practical design choices that enshrine timeless values, including design that empowers people, is inclusive, and builds trust in technology, are essential.

The first demo, presented by Sam George of Microsoft's Azure IoT team, featured Sandvik Coromant using cloud-connected AI to do preventive maintenance on million-dollar machines, running on the Azure cloud and Azure IoT Hub. George announced Azure IoT Edge, a cross-platform solution that allows cloud functions and code to run on small IoT devices. In the Sandvik demo, he showed that moving the functions into containers directly on the machine could reduce latency from about 2 seconds to about 100 milliseconds.

Nadella then talked about using AI and “digital twins” to help improve workplace safety. A video talked about the use of this technology in places like hospitals and construction sites, and Microsoft’s Andrea Carl then showed a demo of using Azure Functions, visual cognitive services, Azure Stack, and commodity cameras to easily create policies and workflows.

Nadella then talked about how the Microsoft Graph allows developers to access people, activities, and devices (through Azure Active Directory), and in particular how this would improve “intelligent meetings.” Microsoft’s Laura Jones did a demo featuring the recently announced Invoke speaker using Cortana with cross-platform skills connecting directly to a time-off system; using Cortana in her car to prepare for a meeting; using Microsoft Teams within Office 365, the Project Rome SDK, and a meetings bot within the meeting itself; and ultimately receiving a summary of the meeting and action items within Outlook afterward.

Nadella concluded by talking about how the future of computing won’t be decided by technology alone, but by the opportunities and responsibilities it offers developers, and showed a video of technology assisting a woman who had tremors caused by Parkinson’s to write and draw.

New Databases and Developer Tools from Azure Stack to Serverless Computing

Executive Vice President Scott Guthrie ran the second part of the keynote, and he gave more details on the “intelligent cloud platform” and the new developer tools Microsoft unveiled at the show.

Scott Hanselman demoed some new management tools such as running the cloud shell inside of the Azure Portal and the Azure mobile portal app for iPhone and Android. He then showed Visual Studio working with production Azure code and adding things like snapshots for debugging. Hanselman also showed Visual Studio for Mac—now in general availability—and how that connects to and enables you to publish applications directly in Azure. He then showed some new functions within Azure’s Security Center.

Guthrie then walked through a number of new announcements for Azure, beginning with a focus on databases. Last month, the company announced SQL Server 2017 for Windows Server, Linux, and Docker, with in-database advanced machine learning in R and Python. He said this is available both on-premises and as Azure SQL Database in the cloud. This week, the firm announced a new Azure Database Migration Service, designed to make it easy to migrate SQL Server or Oracle databases to the cloud with "near-zero" downtime. Guthrie said DocuSign is moving all of its databases from an internal data center to Azure SQL Database. He also announced MySQL as a Service and PostgreSQL as a Service, with high availability and security and the ability to scale up or down with no application downtime. This should be attractive, and looks competitive with similar AWS offerings.

The big news was Azure Cosmos DB, which Guthrie described as the first globally distributed, multi-model database service. It automatically replicates data to any region in the world, lets you pick the data model and NoSQL API of your choice (including DocumentDB SQL, MongoDB, and graph choices such as Gremlin), and lets you pick the storage and throughput (in transactions per second) that you want. Service level agreements (SLAs) across four dimensions are a unique feature: high availability, performance latency (10 ms at the 99th percentile), performance throughput, and data consistency. He showed a video describing how Jet has been running this solution across three U.S. regions, scaling it to support up to 100 trillion transactions per day with single-digit latency at the 99th percentile.

Marvel Chat Demo

Microsoft's Rimma Nehme showed a globally distributed web app that lets users chat with characters in the Marvel Comics universe, and walked through the basic steps of creating such an app running in nine regions. Nehme said it could accommodate throughput and latency worldwide, but with a single system image, so developers can focus on the application rather than the database. She also talked about how, instead of having to choose between "strong consistency" and "eventual consistency," you now have five different consistency levels to choose from, each making a different tradeoff between performance and consistency.

Guthrie said this service is now generally available in all regions, and because it’s an evolution of the older Document DB service, all of those applications have been automatically moved to the new database.

Containers and microservices were another big topic, and Guthrie showed a video featuring Alaska Airlines' use of these services. Visual Studio 2017 now has improved container support, including integrated Docker tooling and support for development, debugging, testing, and deployment. Guthrie said this would work both for "greenfield" applications and for transitioning older .NET applications designed for traditional platforms such as ASP.NET and WCF. Maria Naggaga demoed adding Docker support to an existing application within Visual Studio, with features such as cross-container debugging and improved telemetry (Application Insights) showing how an application is performing as a whole, or at the container level.

Guthrie talked about Service Fabric for Windows and Linux containers, and other new features that make it easier to deploy and manage containers using Kubernetes, Mesos, or Docker Swarm. He also talked about new features for Azure Functions, including making it easier for developers to create, debug, and deploy their own functions, as well as Azure Logic Apps with over 100 data and app connectors built in. Guthrie said Visual Studio 2017 will support both Azure Functions and Logic Apps, and talked about Azure Application Insights for Azure Functions. The example given for containers and functions was Domino’s Pizza.
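The serverless model behind Azure Functions boils down to stateless handlers that a platform invokes per event, owning scaling and lifecycle for you. Here is a minimal sketch of that programming model in Python; it is a generic illustration of the pattern, not the Azure Functions runtime (which at the time targeted languages such as C# and JavaScript).

```python
# A "function app" is just a mapping from triggers to stateless handlers;
# the platform, not the app, decides when and where each handler runs.
handlers = {}

def function(trigger):
    """Register a handler for a named trigger (stand-in for platform config)."""
    def register(fn):
        handlers[trigger] = fn
        return fn
    return register

@function("http:/api/order")
def place_order(event):
    # Handlers keep no local state between invocations.
    return {"status": "accepted", "order_id": event["id"]}

def dispatch(trigger, event):
    # Stand-in for the platform runtime: look up and invoke the handler.
    return handlers[trigger](event)

result = dispatch("http:/api/order", {"id": 42})
print(result)  # the handler's response for order 42
```

Because each handler is stateless and registered against a trigger, the platform is free to spin instances up and down per request, which is what makes the pay-per-execution billing model possible.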

Guthrie then moved to Azure Stack, which he said makes sense in situations where companies can't or don't want to use the public cloud, such as Carnival Cruises running Azure Stack on its cruise ships, since it can't guarantee good connectivity at sea. He said Azure Stack meets regulatory requirements and has more certifications and regions than any other public cloud solution, and talked about how EY runs globally in Azure but uses Azure Stack in countries where it needs to meet local data regulations. Microsoft's Julia White showed how you might build an application with Azure in the cloud and Azure Stack on ships locally, using serverless functions, some of which go to the cloud and some to the local server. Guthrie also demonstrated how this fits into a hybrid cloud solution.

Guthrie focused on the many SaaS providers that now use Azure, and Adobe CTO Abhay Parasnis talked about how it is running its “enterprise SaaS” solutions on the platform, which includes more than 90 trillion transactions. Parasnis talked about the scalability of the platform, Microsoft’s focus on security, and new features such as the ability to integrate Adobe Analytics with Microsoft’s Power BI.

Guthrie said Azure provides the easiest way to integrate with Office 365 and services such as Azure Active Directory. He pushed features such as AppSource, which enables third-party developers to more easily sell enterprise SaaS solutions to Office 365 and Dynamics 365 customers.

AI Tools Offer Customization, Translation

Cognitive services were the focus of the final part of the keynote, and Executive Vice President of Artificial Intelligence & Research Harry Shum talked about the company’s tools. “AI is about amplifying human ingenuity,” he said.

Shum said the move to AI has been driven by big computers, powerful new algorithms, and massive data, and said Microsoft has three big advantages in the AI world: the Microsoft cloud, new algorithms developed by Microsoft research, and all of the data in the Microsoft graph. Shum, who has been a vision researcher, talked about Microsoft’s success in both the ImageNet image recognition competition and in speech recognition tests. But he said he is more excited by what developers can do.

Microsoft now offers 29 cognitive services, he said, including a new video indexer and cognitive service labs, but he particularly emphasized new custom services within the different areas, including vision services and language understanding, known as LUIS (language understanding intelligent service). One demo of a new game, Starship Commander, featured the custom speech services, as it requires words and phrases that are unique to the game.

Shum said the most exciting area today is "conversational AI," based on the "conversation as a platform" paradigm Nadella described at last year's show. This uses cognitive services and the Bot Framework to create custom chat and vision experiences. Microsoft's Cornelia Carapcea demonstrated how this might work using the custom vision service with your own training data, including a feature called "active learning," which can automatically select the images that would add the most value to your model.
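The "active learning" step Carapcea described, picking the images that would most improve the model, is commonly done by uncertainty sampling: ask a human to label the examples the current model is least confident about. A small sketch of that idea follows; it is my illustration of the general technique, not the Custom Vision service's internals, and the image names and scores are made up.

```python
# Uncertainty sampling: given the model's confidence score for each
# unlabeled image, pick the k images nearest 0.5 (most uncertain) so a
# human can label them next.
def select_for_labeling(scores, k):
    # scores: {image_name: predicted probability of the positive class}
    by_uncertainty = sorted(scores, key=lambda img: abs(scores[img] - 0.5))
    return by_uncertainty[:k]

scores = {"cat1.jpg": 0.97, "cat2.jpg": 0.51,
          "dog1.jpg": 0.08, "blur.jpg": 0.49}
print(select_for_labeling(scores, 2))  # the two most ambiguous images
```

Labeling the ambiguous images first moves the decision boundary more per label than labeling images the model already classifies confidently, which is why such a feature can cut the amount of training data a custom model needs.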

Carapcea talked about how new channels have been added to the Bot Framework, including Cortana, Skype, and Bing, bringing the total to 12 channels. Finally, Microsoft introduced Adaptive Cards, which lets you build one model that works across multiple channels.


Michael J. Miller is chief information officer at Ziff Brothers Investments, a private investment firm. Miller, who was editor-in-chief of PC Magazine from 1991 to 2005, authors this blog for PCMag.com to share his thoughts on PC-related products. No investment advice is offered in this blog. All duties are disclaimed. Miller works separately for a private investment firm which may at any time invest in companies whose products are discussed in this blog, and no disclosure of securities transactions will be made.

http://www.pcmag.com/article/353607/microsoft-build-focuses-on-the-intelligent-cloud-and-inte


Microsoft Build Shifts Focus to End-Users

The headlines from today’s keynote at the Microsoft Build conference will probably revolve around the Windows 10 Fall Creators Update, due out later this year, and the new photos app and design system that are part of it. In addition, the company is making a big bet on “mixed reality,” with the announcement of new controllers and well-priced third-party headsets.

To me, the big news is Microsoft's focus on bringing what it calls Windows capabilities to mobile phones: a new Timeline feature that tracks what you are doing and makes it easier to get back to recently used documents, plus a clipboard that works across Windows, Android, and iOS devices.

Terry Myerson, Executive Vice President of the Windows and Devices Group, talked up the success of the current Creators Update to Windows, which came out last month. Addressing the developer audience, he said that “platform wars” have made it harder for developers to create great app experiences across multiple platforms. He said Microsoft is committed to letting developers using .NET, Web, or C++ create applications for the Windows Store for continuous delivery, and is also creating new tools for developers.

More importantly, Myerson announced the Windows 10 Fall Creators Update, which he said would build on the Creators Update. He highlighted a new application called Windows Story Remix, which lets you combine photos and videos from multiple platforms and then add special effects, with applications such as Paint 3D and View 3D, bringing "mixed reality" to your videos.

Lorraine Bardeen, General Manager for Windows and HoloLens Experiences, demonstrated how the new video creation tool will work. She explained that the tool will let multiple people who attended an event share photos and videos, whether on Windows machines or on Android or iOS (through an app built using Xamarin), and said that the application automatically creates a montage with the best parts of the videos and photos using AI. The tool can redo the video to focus on one person if you click on that person, and you can also change the style of the video.

You can also customize videos with a story editor and search for people, places, and things, in a feature that uses AI to recognize elements in your photos and videos. You can choose a music theme and the product will re-edit the video to match the beats of the music. Other features include pen support, with written notes able to follow a person moving through a video.

The biggest differentiator is that you can bring in content from the Remix 3D site (which Bardeen said would be offered to developers through an API), and anchor it to an item. (In the example she showed, you could attach a fireball to the image of a soccer ball.)

New Design Language for Windows Developers

Joe Belfiore, Corporate Vice President in the Operating Systems Group, showed off the next iteration of Microsoft's Fluent Design System, previously known by the codename Neon. He noted that developers need to support a variety of new input methods, such as voice and pen, and many new devices, not just PCs and phones but also mixed reality headsets. The Fluent Design System includes tools and suggestions for using light, depth, motion, material, and scale for things such as 3D within application design.

Belfiore said the number of ink-enabled devices has doubled in the past year, and said that Fluent will enable pens to offer a more complete interaction model. He showed how this would work within the Microsoft Edge Browser, to do things like handwriting entries or scrolling and selecting using the pen, and then went on to show this feature working in Microsoft Word and to create ink annotations in PDFs.

These capabilities will show up in the Windows shell and Microsoft apps over time, Belfiore said. In other briefings Microsoft made it clear that it will be more of an evolution of the current design (once known as “Metro”) than a complete redesign. It seems like Microsoft may have learned a few things from the massive UI changes that went into Windows 8.

OneDrive On-Demand, Timeline, and a Cloud-Connected Clipboard

Next up were a series of new features designed to connect Windows to multiple devices via the “Microsoft Graph,” which may be the most interesting end-user feature of the show.

“Windows PCs will love all your devices,” Belfiore said, adding that the idea is to bring together all of your files, activities, and content across multiple devices.

The first example was OneDrive Files On-Demand, which will be built into the Fall Creators Update and will be available on other devices. With OneDrive Files On-Demand, Windows and OneDrive will make all of your files available on all your devices (which in many respects they already do), but will now automatically determine which files should stay in the cloud and which should be downloaded to the device itself. You will also be able to pin files to the hard drive manually. With the Fall Creators Update, the Documents folder will connect to OneDrive and show you visually where various files are located. Belfiore said this will work with both personal documents and documents created in shared Team sites. He demonstrated it working on Windows and Windows Phone (which got laughs from most of the audience), but said it would also work on OneDrive on Android and iOS.

Another new feature is called Windows Timeline, which visually displays all of the things you have done on Windows across devices; you can also search for items or use Cortana with this data. The demo showed the Timeline moving between a desktop, a notebook, and an iPhone using Cortana. It looked interesting. To make it easier for people to add this to their phones, Windows will include a new Phone icon in the Settings app.

Another new feature is a cloud-powered clipboard, which lets you easily take items clipped on one device and use them on another one. The clipboard will be surfaced in a number of different ways. In iOS, this involves using the SwiftKey keyboard, which you can use to paste in elements from the clipboard. In Office, Belfiore showed a new visual clipboard that gives you suggestions of recently clipped items you may wish to paste. These tools will be available to developers so they can add them to their apps.

I thought this looked quite cool, and can see a real application for sharing business information. I’m more skeptical when it comes to consumers actually using OneDrive to share personal photos, which may limit the use of things like the Story Remix app.

Cross-Platform Development Tools

"Windows PCs will love all your devices," Belfiore said, because of the Microsoft Graph, the Windows 10 Creators Update, and the Project Rome SDK.

Abolade Gbadegesin, the architect of the Project Rome SDK, demonstrated how it might help developers modernize their applications. He showed migrating a .NET application implemented using WPF (Windows Presentation Foundation), moving it to standard shared code using .NET Standard.

Gbadegesin made specific announcements for developers, including .NET Standard 2.0 for UWP (Universal Windows Platform), and XAML Standard 1.0 for Windows, iOS, and Android (which standardizes the UI code across platforms). He showed how this immediately made an application work with touch and mouse, as well as a pen. Other features he announced included new data grid controls and connected animations on multiple screens.

Gbadegesin also described how the company created new APIs that connect People, Activities, and Devices on Windows and on Android via the Project Rome SDK, and announced that Project Rome is now also available for iOS. He showed how developers could use this to move tasks across devices using Timeline or Cortana.

Myerson then returned, and called the Windows 10 Fall Creators Update a "big opportunity for us to keep customers close to the content, files, and activities they have come to use and love," as well as an opportunity for developers to modernize apps and use Windows to connect multiple devices.

Myerson described changes to Visual Studio that make it easier for developers to test and debug their applications and publish them directly to the Windows Store. He described Windows 10 S—announced last week and running only Windows Store applications—as being aimed at schools, but said it’s also receiving a lot of interest from enterprise customers because of its improved security and performance.

Myerson announced that coming to the Windows Store is not only Spotify, but also things like SAP Digital Boardroom and Autodesk SketchBook. He showed a SketchBook demo on stage and talked about the successes that app has had in selling subscriptions through the platform.

The big news here is that Apple’s iTunes is coming to the Windows Store, with both music and connections to the iPhone, which makes a lot of sense, given that Apple has many Windows customers but is facing more competition from things like Spotify.

Other new features for developers included making it easier to use Linux through the Bash shell on top of Windows, and making it easier to create iOS applications directly on Windows without the need for a Mac. Myerson said Ubuntu Linux is now available in the Windows Store, with both SUSE Linux and Fedora coming as well, and demonstrated creating a .NET application for iOS within Visual Studio using the Xamarin Live Player. Things like this are no longer surprising, but it wasn't too long ago that the concept of Microsoft supporting Linux or Apple platforms would have been unthinkable.

Myerson also showed Narrator Developer Mode, which makes it easier for developers to better understand and test how their applications would work for people who are visually impaired.

Moving to “Mixed Reality”

Myerson then shifted to mixed reality, and talked about how HoloLens has been used by Japan Airlines to improve aircraft maintenance, by Mod Pizza to plan restaurants, and by ThyssenKrupp to improve the delivery of home elevators.

Alex Kipman, the chief mover behind the HoloLens project, talked about developers using HoloLens, and showed “mixed reality” videos of things like heat maps, 3D CAT scans, and construction beam clearance. Kipman said HoloLens is now in 9 countries, and would add China by the end of the month.

Kipman said virtual reality and augmented reality are just different points on a continuum of mixed reality, and said developers shouldn't think of it as an either/or proposition. In the future, he said, you won't have to choose between see-through and occluded headsets, because devices will adapt to your needs. This is why Microsoft offers one Mixed Reality platform that supports a range of solutions, he said, and pushed the advantages of Windows 10, such as support for six degrees of freedom tracking without room setup, and headsets understanding where other headsets are located.

Kipman announced a new Windows Mixed Reality motion controller, with sensors inside the headset, so you don’t need outside markers. He said Acer would be offering a bundle of its headset, which it announced a couple of weeks ago, and the new motion controller for $399, in time for the holidays.

Kipman brought up a number of representatives of Cirque du Soleil, who talked about how they are now able to use HoloLens to imagine a new stage for upcoming productions before it has been built. Kipman said this is a custom collaboration with Microsoft, but pushed the idea that developers could create their own such collaborations. “Mixed reality is here today,” he said, and announced that developers can pre-order the Acer and HP versions of the mixed reality headsets starting today, in advance of a holiday consumer ship.

It looks cool, and though mixed reality isn’t mainstream yet, it does appear to be gaining some traction.


http://www.pcmag.com/article/353633/microsoft-build-shifts-focus-to-end-users
