Living With a Huawei Honor 6X | Michael Miller

With a list price of $250, I wasn’t expecting my experience with the Huawei Honor 6X smartphone to be as good as the experience one would have with one of the flagship Android phones like the Google Pixel XL, the Samsung Galaxy S7, or Huawei’s own Mate 9 and Honor 8. The headline feature is the dual rear camera, which lets you adjust the focus point after shooting and simulate wide-aperture effects such as bokeh. Huawei stresses that the phone uses a Sony IMX386 Exmor RS sensor, which gives it faster focusing and larger pixels (1.25 square microns) than the typical smartphone. In addition, the Honor 6X includes an 8MP front-facing camera with a wide-angle lens, designed for taking selfies.


In general use, I found the camera to be pretty fast and pretty good in most situations, if not quite up to the best of the higher-end phones. See, for example, the photo of Grand Central Terminal above—it’s nice, but not quite as sharp as what I was able to get with some other cameras. Still, for a typical landscape, portrait, or selfie, the Honor 6X takes pictures you’d be quite happy to view, print, or post on social media.


Honor 6X


The dual camera really comes into play when taking portraits. You get to this feature by choosing the wide-aperture selection from the photo menu; after the photo is taken, you can then adjust the focus point. The feature isn’t perfect—I’ve yet to see a smartphone that can really do this as well as a professional DSLR with a great lens—but it’s far better than I’ve noticed with other cameras in this price range. I was pretty impressed.


I found low-light photography to be okay, if a bit noisy. The “night shot” mode can improve on this significantly, but I found that it really only works if you’re using a tripod or stand, since the mode requires the phone be held steady for 20 seconds or so. The camera also has an interesting time-lapse option and a “light-painting” mode for things like capturing the trails of light from moving cars. There are a variety of filters, a popular feature I personally rarely use.


Honor 6X camera


Like the Honor 8, the 6X includes Huawei’s EMUI 4.1 user interface, a relatively heavy overlay on top of Android 6.0 Marshmallow. I found it pretty usable, though I can’t say it added much to the basic Android experience. As with many Android phones, my experience with the built-in email and calendar applications has been less than ideal. (Of course, you can download others.) By default, the 6X does not include the Google Assistant, though it does have the voice-activated Google Now interface.


Overall, what really impressed me was that this is a phone that costs less than half that of the top-end phones and yet looks and behaves similarly. It doesn’t have the cool look of an iPhone or the fancy colored back of the Honor 8 or the Galaxy S7, but the 6X does the job with style, and with bonus features—notably the dual camera—to spare.


Here’s PCMag’s review.



Michael J. Miller is chief information officer at Ziff Brothers Investments, a private investment firm. Miller, who was editor-in-chief of PC Magazine from 1991 to 2005, authors this blog for PCMag.com to share his thoughts on PC-related products. No investment advice is offered in this blog. All duties are disclaimed. Miller works separately for a private investment firm which may at any time invest in companies whose products are discussed in this blog, and no disclosure of securities transactions will be made.

http://www.pcmag.com/commentary/351924/living-with-a-huawei-honor-6x


Explore the Highlights of the Solid-State Circuits Conference (ISSCC)

We’ve heard a lot about Moore’s Law slowing lately, and while that does seem to be true in some cases, in other parts of the semiconductor business, there is ongoing progress. At last week’s International Solid-State Circuits Conference (ISSCC), the big chip trends seemed to be around deploying new materials, new techniques, and new ideas to keep pushing transistor density higher and improving on power efficiency. Of course, that isn’t really news. We saw this reflected in talks about producing logic chips on new 7nm processes, on creating 512Gb 3D NAND chips, and on a variety of new processors.


Chip designers are considering new structures and materials for transistors, as shown in the slide above from TSMC. There were also plenty of discussions of new tools for making the transistors, including lithography advances such as EUV and directed self-assembly, and new ways of packaging multiple die together.


Before digging into the details, it remains pretty amazing to me just how far the chip industry has come and just how pervasive chips have become in our daily lives. Texas Instruments CTO Ahmad Bahai noted in his presentation that in 2015, the industry sold an average of 109 chips for every person on the planet. His talk focused on how, instead of markets dominated by a single application (first PCs, then cell phones), the industry now needs to focus on “making everything smarter,” as different kinds of chips find their way into a huge number of applications.


The industry faces big challenges, though. The number of companies that can afford to build leading-edge logic fabrication plants has shrunk from twenty-two at the 130nm node to just four today at the 16/14nm node (Intel, Samsung, TSMC, and GlobalFoundries), with new process technology costing billions to develop, and new plants costing even more. Indeed, last week Intel said it would spend $7 billion to finish a fab shell it built a few years ago in Arizona, with the aim of eventually producing 7nm chips there.


Still, there were a number of presentations on various companies’ plans to move to 10nm and 7nm processes.



Samsung has rolled out its 10nm process, and the first chip announced on it was the Qualcomm Snapdragon 835, which is due out shortly. TSMC may be the farthest along at actually commercializing what it calls a 7nm process, and at ISSCC, it described a functional 7nm SRAM test chip. The process will use the now-standard FinFET transistor structure, but with some circuit techniques to make it work reliably and efficiently at the smaller size. Notably, TSMC says it will produce the first version of its 7nm chips using immersion lithography, rather than waiting for EUV like most of its competitors.


Recall that what each of the major manufacturers calls 7nm varies tremendously, so in terms of density, it’s possible that the TSMC 7nm process will be similar to Intel’s forthcoming 10nm process.


Samsung 7nm EUV


Samsung is also working on 7nm, and the company has made it clear that it plans to wait for EUV. At the show, Samsung talked about the advantages of EUV lithography as well as the progress it has made in using the technology.


3D NAND


Some of the more interesting announcements covered 512Gb 3D NAND flash, and showed just how quickly NAND flash density is growing.


WD 3D NAND Bit Density


Western Digital (which has acquired SanDisk) talked about a 512Gb 3D NAND flash device that it announced prior to the show, and explained how this device continues to increase the density of such chips.


WD 3D NAND Die Micrograph


This particular chip uses 64 layers of memory cells and three bits per cell to reach 512Gb on a die that measures 132 square millimeters. It’s not quite as dense as the Micron/Intel 3D NAND design, which uses a different architecture with the peripheral circuitry under the array (CuA) to reach 768Gb on a 179 square millimeter die, but it’s a nice step forward. WD and Toshiba said they were able to improve reliability, speed up read times by 20 percent, and reach write throughput of 55 megabytes per second (MBps). The chip is in pilot production and due to be in volume production in the second half of 2017.


Samsung 3D NAND Bit Scaling


Not to be outdone, Samsung showed off its new 64-layer 512Gb 3D NAND chip, one year after it showed a 48-layer 256Gb device. The company made a big point to demonstrate that while the areal density of 2D NAND flash grew 26 percent per year from 2011 to 2016, it has been able to increase the areal density of 3D NAND flash by 50 percent per year since introducing it three years ago.
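To put those growth rates in perspective, here is a quick back-of-the-envelope comparison. It is only a sketch: the 26 percent and 50 percent annual figures are the ones Samsung cited, and the three-year horizon simply matches the time since 3D NAND was introduced.

```python
# Rough comparison of cumulative areal-density growth, using the annual
# rates cited in the talk: 26 percent per year for 2D NAND vs. 50 percent
# per year for 3D NAND, compounded over roughly three years.
years = 3

growth_2d = 1.26 ** years
growth_3d = 1.50 ** years

print(f"2D NAND after {years} years: {growth_2d:.1f}x the starting density")
print(f"3D NAND after {years} years: {growth_3d:.1f}x the starting density")
# Roughly 2.0x for 2D vs. 3.4x for 3D under these assumptions.
```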


Samsung 512 GB 3D NAND Architecture


Samsung’s 512Gb chip, which also uses three-bits-per-cell technology, has a die size of 128.5 square millimeters, making it slightly denser than the WD/Toshiba design, though not quite as good as the Micron/Intel design. Samsung spent much of its talk describing the reliability and power challenges created by using thinner layers, and the new techniques it has developed to address them. It said read time is 60 microseconds (149MBps sequential reads) and write throughput is 51MBps.
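Those density claims are easy to check from the stated capacities and die sizes. Here is a minimal sketch using only the figures quoted above; the ordering, not the exact decimals, is the point.

```python
# Areal bit density (Gb per square millimeter) computed from the capacities
# and die sizes quoted above.
chips = {
    "WD/Toshiba 512Gb": (512, 132.0),     # (capacity in Gb, die area in mm^2)
    "Samsung 512Gb": (512, 128.5),
    "Micron/Intel 768Gb": (768, 179.0),
}

for name, (gbits, area_mm2) in chips.items():
    print(f"{name}: {gbits / area_mm2:.2f} Gb/mm^2")
# WD/Toshiba ~3.88, Samsung ~3.98, Micron/Intel ~4.29 Gb/mm^2, which matches
# the ordering described in the text.
```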


It’s clear all three of the big NAND flash camps are making good progress, and the result should be denser and eventually less expensive memory from all of them.


New Connections


Intel EMIB


One of the topics I have found most interesting lately is the embedded multi-die interconnect bridge (EMIB), an alternative to other so-called 2.5D technologies for combining multiple die in a single chip package; it is less expensive because it doesn’t require a silicon interposer or through-silicon vias. At the show, Intel talked about this when describing a 14nm 1GHz FPGA with a die size of 560mm2, surrounded by six 20nm transceiver die that are manufactured separately, possibly even on other process technologies. (This is presumably the Stratix 10 SoC.) But it became more interesting later in the week, as Intel described how it would use this technique to create Xeon server chips at 7nm and on the third generation of 10nm.


Processors at ISSCC


ISSCC saw a number of presentations about new processors, but rather than product announcements, the focus was on the technology that goes into making the chips work as well as possible. I was interested to see new details for a number of highly anticipated chips.


AMD Comparison


I’m expecting the new Ryzen chips using AMD’s Zen architecture to ship shortly, and AMD gave a lot more technical details about the design of the Zen core and its various caches.


This is a 14nm FinFET chip based on a core complex with 4 cores, a 2MB level 2 cache, and 8MB of 16-way associative level 3 cache. The company says the base frequency for an 8-core, 16-thread version will be 3.4GHz or higher, and said the chip offers a greater than 40 percent improvement in instructions per cycle (IPC) over the previous AMD design.
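For reference, the per-core numbers fall out of the complex-level figures above; the sketch below also assumes that an 8-core part pairs two such core complexes, which is my assumption for illustration rather than something stated in the talk.

```python
# Cache budget implied by AMD's published core-complex (CCX) figures:
# 4 cores, 2MB of L2, and 8MB of 16-way L3 per complex.
cores_per_ccx = 4
l2_per_ccx_mb = 2
l3_per_ccx_mb = 8

print(f"L2 per core: {l2_per_ccx_mb / cores_per_ccx * 1024:.0f} KB")  # 512 KB

# Assumption (not stated above): an 8-core, 16-thread part pairs two complexes.
ccx_count = 2
print(f"8-core part: {ccx_count * l2_per_ccx_mb} MB L2 total, "
      f"{ccx_count * l3_per_ccx_mb} MB L3 total")
```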


The result is a new core that AMD claims is more efficient than Intel’s current 14nm design, though, of course, we’ll have to wait for final chips to see the real performance.


As described before, this will be available initially in desktop chips known as Summit Ridge and is slated to be out within weeks. A server version known as Naples is due out in the second quarter and an APU with integrated graphics primarily for laptops is due to appear later this year.


IBM Power9


IBM gave more detail on the Power9 chips it debuted at Hot Chips, designed for high-end servers and now described as being “optimized for cognitive computing.” These are 14nm chips that will be available in versions for both scale-out (with 24 cores that can each handle 4 simultaneous threads) and scale-up (with 12 cores that can each handle 8 simultaneous threads). The chips will support CAPI (the Coherent Accelerator Processor Interface), including CAPI 2.0 using PCIe Gen 4 links at 16 gigabits per second (Gbps), and OpenCAPI 3.0, designed to work at up to 25Gbps. In addition, they will work with NVLink 2.0 for connections to Nvidia’s GPU accelerators.


Mediatek SoC


MediaTek gave an overview of its forthcoming Helio X30, a 2.8GHz 10-core mobile processor, notable for being the company’s first to be produced on a 10nm process (presumably at TSMC).


This is interesting because it has three different core complexes: the first has two ARM Cortex-A73 cores running at 2.8GHz, designed to handle heavy-duty tasks quickly; the second has four 2.5GHz A53 cores, designed for most typical tasks; and the third has four 2.0GHz A35 cores, which are used when the phone is idle or for very light tasks. MediaTek says the low-power A53 cluster is 40 percent more power efficient than the high-power A73 cluster, and that the ultra-low-power A35 cluster is 44 percent more power efficient than the low-power cluster.
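Since each of MediaTek’s efficiency figures is quoted relative to the next cluster up, the two numbers compound; here is a quick sketch of what the quoted figures imply. The arithmetic is mine, not a MediaTek claim.

```python
# MediaTek quotes each cluster's efficiency relative to the next cluster up,
# so the figures compound when comparing the A35 cluster to the A73 cluster.
a53_vs_a73 = 1.40   # low-power cluster vs. high-power cluster
a35_vs_a53 = 1.44   # ultra-low-power cluster vs. low-power cluster

a35_vs_a73 = a53_vs_a73 * a35_vs_a53
print(f"A35 cluster vs. A73 cluster: {a35_vs_a73:.2f}x the power efficiency")
# Roughly 2x, under the figures quoted in the talk.
```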


At the show, there were a lot of academic papers on topics like chips specially designed for machine learning. I’m sure we’ll see much more emphasis on this going forward, from GPUs to massively parallel processors designed to handle 8-bit computing, to neuromorphic chips and custom ASICs. It’s a nascent field, but one that is getting an amazing amount of attention right now.


Even further out, the biggest challenge may be moving to quantum computing, which is a whole different way of doing computing. While we are seeing more investments, it still seems a long way from becoming a mainstream technology.


In the meantime, though, we can look forward to a lot of cool new chips.




http://www.pcmag.com/article/351802/explore-the-highlights-of-the-solid-state-circuits-conferenc


Data Center, New Initiatives Top Agenda at Intel’s Investor Day

Attending Intel’s Investor Day, what struck me the most was how Intel is changing from a company led by its PC client business into one that is much more diversified, and one that is increasingly being led by its Data Center business. This was best exemplified by the news that, in a few years, when the company is finally ready with its 7nm process, the first chips created on that process will be Xeon processors aimed at the data center. That’s a big break with tradition—for decades, Intel has brought its newest technology first to processors for clients—once desktops, now notebooks—with server products tending to follow a year or more later.


This is a big part of CEO Brian Krzanich’s plan to position Intel to address a much larger market than the traditional PC and server businesses, which together have a total addressable market of about $45 billion a year. Instead, he said, Intel is going after a much larger market, including the broader data center (covering networking and interconnects), non-volatile memories, mobile (through premium modems), and the Internet of Things—items that together represent a market with a $220 billion total addressable market for silicon by 2021.


All of these markets, he said, build on Intel’s traditional strengths in silicon and process technology. And they are all linked by a need for computing on larger amounts of data in the future, in a vision that sees data collected, moved to the cloud, used for large-scale data analytics, and then pushed back out; but with more computing needed on devices at the edge for real-time decisions as well.



As he has in a number of recent presentations, Krzanich explained that he sees the amount of data growing tremendously, noting that today the average person generates about 600MB of data each day and forecasting that this will grow to 1.5GB by 2020. While today’s cloud is built mostly on data from people, he said, the cloud of tomorrow will be built mostly on machine data. The average autonomous vehicle produces 4TB of data a day, a plane 5TB, a smart factory a petabyte, and cloud video providers can push out as much as 750PB of video daily. Individual applications could produce even more, he said, noting that the company’s “360 Replay” technology, used during the Super Bowl and other sports events, consumes 2TB of data per minute. At Intel, “we are a Data Company,” Krzanich said.
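The gap between people and machines as data sources is the striking part; here is a quick ratio using the figures Krzanich quoted. The arithmetic (and the assumption of decimal units) is mine.

```python
# Daily data-generation figures quoted by Krzanich (decimal units assumed).
person_per_day_bytes = 600e6      # ~600MB per person
car_per_day_bytes = 4e12          # ~4TB per autonomous vehicle
replay_per_minute_bytes = 2e12    # ~2TB per minute for "360 Replay"

people_equivalent = car_per_day_bytes / person_per_day_bytes
print(f"One autonomous car generates as much data per day as ~{people_equivalent:,.0f} people")
print(f"One hour of 360 Replay: ~{replay_per_minute_bytes * 60 / 1e12:.0f}TB")
```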


I found it interesting that Krzanich said Intel’s top priority for the year is continued growth in the data center and adjacent technologies. This was followed by continuing to have a strong and healthy client business, growth in the Internet of Things business, and “flawless execution” in its memory and FPGA businesses.


Other speakers gave details about each of these markets, including some interesting technology and market trends, as well as financial projections.


10nm Technology and the PC Business


Murthy Renduchintala, who runs the company’s Client and Internet of Things Businesses and its Systems Architecture Group, began by talking about “trying to align process roadmaps with our product roadmaps,” and explained that as an integrated device manufacturer (IDM)—in other words, a company that not only designs semiconductor products but also manufactures them—Intel has several advantages.


Renduchintala compared Intel to an “artisan baker” who not only can make bread but can also work with farmers to decide which wheat to plant and where to plant it. This way, the product designers can look at transistor physics three years before a product is manufactured. For instance, he said, Intel used different flavors of transistors for the CPU and GPU even within the same chip, a level of granularity that Renduchintala said fabless semiconductor companies would find difficult to achieve. (He joined Intel about a year ago from Qualcomm, which, like most other vendors in the industry, uses foundries to do the actual manufacturing of its products.)


Renduchintala and Chip Density


Even though other companies are talking about producing chips on 10nm and even 7nm, Renduchintala said that Intel has a three-year lead over the others. He noted that rather than focusing only on gate pitch, Intel focuses on the effective logic cell area (cell width times cell height), which determines the overall area of the cell. He said Intel will maintain this lead even after competitors deliver 10nm later this year. Intel plans to release its first 10nm chips later this year as well—Krzanich showed a 2-in-1 laptop powered by a 10nm Cannon Lake processor at CES in January—and this will account for significant volume in 2018, he said.
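A sketch of the metric Renduchintala described: density is set by the full logic cell footprint, not by gate pitch alone, so a process with a tighter gate pitch can still end up less dense if its cells are taller. The numbers below are purely hypothetical, just to illustrate the point.

```python
# Illustration of the "effective logic cell area" metric: density depends on
# cell width times cell height, not on gate pitch alone. All numbers below
# are hypothetical and purely illustrative.
def cell_area(width_nm, height_nm):
    return width_nm * height_nm   # footprint of one standard cell, in nm^2

process_a = cell_area(width_nm=54, height_nm=272)   # tighter pitch, taller cell
process_b = cell_area(width_nm=60, height_nm=210)   # looser pitch, shorter cell

print(f"Process A cell area: {process_a} nm^2")
print(f"Process B cell area: {process_b} nm^2")
# Process A wins on width (gate pitch) yet ends up with the larger cell,
# i.e., the lower logic density.
```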


The economic side of Moore’s Law is alive and well despite rising wafer costs, Renduchintala said, noting that the company believes this will be true of the 7nm node as well. But he placed new emphasis on improvements within a process node, saying each of the three generations of 14nm technology thus far has produced 15 percent better performance on the SYSmark benchmark. He believes Intel can continue to do this on an annual cadence, with continued process improvements as well as design and implementation changes.


On the PC business, he noted that even though PC units have been falling, Intel’s profits in the segment grew significantly last year, mostly because of a focus on particular segments, such as PC gaming, where the company introduced a 10-core Broadwell-E platform with an average selling price of over $1,000; and by pushing platform technologies, such as LTE modems, Wi-Fi, WiGig, and Thunderbolt. He noted that the company has grown its mix of higher-end processors and hopes to continue that trend in 2017.


Looking forward, Renduchintala said the client group has made strategic bets on VR and on 5G modems. He noted Intel’s approach to 5G is very different from its approach to 4G, where it initially pushed WiMax, while the rest of the industry settled on LTE. He said Intel now knows it needs industry-wide standards and partners and cited a variety of companies Intel is working with on core networking, access network standards, and wireless radio standards. He said Intel is the only company that can provide 5G “end-to-end” solutions from the “cloudification of the RAN” (the radio access network) to the data center, and said it plans to be shipping samples of its first 5G global modem by the end of the year—using Intel’s 14nm technology—and plans to ship these in the millions in 2018.


Data Center Grows Beyond Traditional Server


Diane Bryant, who runs the company’s Data Center Group, focused on how enterprises are going through a period of transition, driven by the move to cloud computing, network transformation, and the growth of data analytics.


One big change for her group going forward is that it will be the first to launch on the next generation process node, meaning that Xeon products will be Intel’s first 7nm processors. In addition, she said, the data center products would also be the first on the “third wave” of 10nm products. (The first wave of 10nm, for mobile products, is due out at the end of this year, so the first 10nm servers won’t be out until next year at the earliest. Intel hasn’t yet confirmed an exact date for its 7nm process, but it seems likely that it would be in 2020 or 2021.)


A few different factors will make this change possible, Bryant said. First, the data center business now has enough volume, as it takes a significant number of wafers to bring up a new process. But just as important is Intel’s new use of a packaging solution called EMIB (for Embedded Multi-die Interconnect Bridge), which lets the company cut a Xeon die into four pieces, debug each piece independently, and then connect them via this 2.5D package so the result functions as a single chip. (The package was actually first announced in 2014, but the company gave more details at this week’s ISSCC conference, and this looks like its first major use.) Until now, a server die was just too big to be used for first production, but by cutting it into pieces, you get a number of smaller die, which are usable.
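The “too big for first production” point is a yield argument: on an immature process with a high defect density, yield falls off steeply with die area, so four quarter-size die that can each be tested separately fare far better than one monolithic die. Here is a minimal sketch using a simple Poisson yield model; the defect density and die areas are made-up numbers, not Intel figures.

```python
import math

# Simple Poisson die-yield model: yield = exp(-die_area * defect_density).
# The defect density and die areas below are illustrative, not Intel figures.
defect_density_per_cm2 = 0.5        # plausible for an early, immature process
big_die_cm2 = 6.0                   # one large monolithic server die
small_die_cm2 = big_die_cm2 / 4     # one of four EMIB-connected pieces

def die_yield(area_cm2, d0):
    return math.exp(-area_cm2 * d0)

print(f"Monolithic die yield:   {die_yield(big_die_cm2, defect_density_per_cm2):.1%}")    # ~5%
print(f"Quarter-size die yield: {die_yield(small_die_cm2, defect_density_per_cm2):.1%}")  # ~47%
# Because each small die can be tested before assembly, only known-good pieces
# go into the package, so the effective product yield tracks the small-die
# number rather than the monolithic one.
```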


Bryant and Enterprise Transformation


Bryant noted that Intel’s overall data center business grew 8 percent last year, but enterprise and government sales were actually down 3 percent, while cloud service provider sales were up 24 percent and communications service provider sales were up 19 percent. Enterprise sales accounted for 49 percent of the business last year, the first time this segment was less than half of the group’s sales.


Bryant said that enterprises continue to need more compute—growing at 50 percent per year—but said that some workloads are quickly moving to the cloud, while others are mostly staying on premises. For instance, she said, collaboration workloads grew 15 percent in the cloud last year but actually shrank 21 percent on-premises. On the other hand, she said, high-performance simulation and modeling require extremely low latency, so they are almost entirely run on-premises. Overall, 65 percent of workloads are now run on-premises, a figure she expects to level out at about 50 percent by 2021.


Bryant and AI Workloads


Broadly defined, artificial intelligence applications account for about 7 percent of today’s servers, Bryant said, with the majority running classical machine learning algorithms in applications such as recommendation engines, stock trading, and detecting credit card fraud. But, she said, deep learning—the neural-network approach used in the prominent image recognition and voice processing applications—accounts for 40 percent of that. In this area, Bryant talked about how GPGPU instances have gotten a lot of attention, but that overall these still represent only a small percentage of the overall server market: 20,000 to 30,000 servers out of 9.5 million.


Bryant noted Intel’s intention to serve all parts of the AI market with a series of processors, including the next-generation traditional Xeon servers; packages that combine Xeon with the firm’s FPGAs (through its Altera acquisition); Xeon Phi (with many smaller cores in a new version called Knights Mill that allows lower-precision calculations); and Lake Crest, which includes a chip specifically designed for neural networks, a result of the acquisition of Nervana. The Nervana name is being used to describe the whole line.


Another change is Intel’s increased focus on what it calls “adjacencies”—products that surround the server, including its OmniPath interconnect used in the high performance computing market; silicon photonics, including an on-chip laser providing 100Gbps now, with 400Gbps on the roadmap; 3D XPoint memory DIMMs; and its Rack Scale Design proposal for denser, more energy-efficient server racks. Bryant talked about the increasing importance of the networking market, where Intel is working to convert communication service providers from ARM and custom processors to the Intel architecture, as part of a move to SDN and Network Functions Virtualization. She said she expects 5G to be an “accelerant” in that effort. Bryant also said Intel is now the leader in network silicon (counting both its data center products and the Altera FPGAs, although the slide she showed indicated it is still a highly fragmented market).


3D NAND and 3D XPoint Memory


Rob Crooke, who runs the company’s non-volatile memory group, talked about why now is “a great time to be the memory guy at Intel,” and addressed the company’s plans for both 3D XPoint and 3D NAND flash memory.


I was a bit surprised to hear relatively little on the Optane drives, which Intel is preparing using the 3D XPoint technology. These drives are arriving a bit later than originally expected, but Crooke said that they have begun shipping the first units to data centers, and said the company has a clear path for three generations of this technology. He seemed to be positioning them more as eating into the market for high-performance memory (DRAM) than for the traditional SSD storage market, at least initially, but in the long run, both Crooke and Krzanich sounded very optimistic on Optane, and not only in the data center, but in enthusiast PCs as well, with Krzanich saying that “every single gamer” will want Optane in his or her system.


Crooke said this would be “an investment year” for Optane, with the company expecting such drives to account for less than 5 percent of total storage revenue.


Crooke and 3D NAND Technology


Crooke was extremely enthusiastic when talking about the firm’s plans in 3D NAND. He explained that he thinks Intel has a competitive advantage with its 3D NAND products because its design—created in conjunction with manufacturing partner Micron—offers higher areal density and a better cost than its competitors. Intel currently ships a 32-layer 3D NAND product, but Crooke said it is on track to deliver a 64-layer product for revenue in the third quarter, only five quarters after the 32-layer version shipped; he said 3D NAND is on track to make up 90 percent of the company’s NAND shipments by the end of the year. Crooke also talked about how Intel is currently producing this at a joint venture fab with Micron in Singapore; how Intel is ramping a big factory in China on its own; and how Intel will work with Micron on another factory.


Crooke and 32 TB of 3D NAND


To illustrate how fast density is improving with this technology, Crooke first held up a 1TB hard drive, and then showed how the first-generation 1TB SSD was a bit smaller. Then he held up the 1TB module currently shipping, which looks to be about the size of a stick of gum, and then showed the module Intel will be shipping later in the year, a single thumbnail-sized package. To illustrate how this will impact the density of a data center, he held up a thin 32TB module designed for a server and said that using this module you could now get 1 petabyte in a thin 1U server, instead of the full rack that would be required with hard drives.
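The 1U math is straightforward; here is a quick check of the claim, assuming decimal terabytes and petabytes, which is how storage vendors typically count.

```python
# How many of the 32TB server modules it takes to reach 1 petabyte
# (decimal units assumed, as storage vendors typically count).
module_tb = 32
petabyte_tb = 1000

modules_needed = petabyte_tb / module_tb
print(f"{modules_needed:.1f} modules per petabyte")
# A little over 31 modules, a count that plausibly fits in a single 1U chassis,
# versus a full rack of hard drives for the same capacity.
```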


Internet of Things & ADAS


Davis and IoT Markets


Doug Davis, who has been running the firm’s Internet of Things group and is now focusing on the advanced driver assistance systems (ADAS) group, talked about both of those areas.


On IoT, he said Intel’s interest is primarily in the value that data has when moving through the network to the cloud, and the application of data analytics, both in the cloud and at the edge. He said the difference between IoT and earlier embedded systems is primarily about connectivity and the use of open platforms. Davis cited a Gartner study that said there were 6.4 billion connected things at the end of last year, an increase of 30 percent over 2015.


In particular, Davis focused on the retail, transportation, industrial/energy, and video markets, including network video recorders and data analytics moving to cameras and video gateways.


Davis’s biggest focus was on autonomous driving, which he said would be the most visible AI application in the next 5 to 10 years. He talked about how this will require connections back to the cloud and said that while today’s cars use $100 to $200 of silicon (much of this for the infotainment system), by 2025 the silicon bill of materials may increase to 10-15 times that number. Davis said Intel is involved in a number of autonomous vehicle tests, including a 5G trial platform, and has a partnership with BMW and Mobileye for the next generation of such vehicles.




http://www.pcmag.com/article/351692/data-center-new-initiatives-top-agenda-at-intels-investor


The Top SaaS Vendors, and Why Consolidation May Be Harder Than It Looks

There’s little doubt that more and more applications are moving from on-premises solutions to Software-as-a-Service; that’s been going on to some extent for at least 18 years, since the early days of Salesforce (or even earlier, if you want to count the payroll processing services from firms such as ADP). In recent years, this trend has picked up a lot of momentum.


After hearing Oracle co-CEO Mark Hurd suggest that by 2025 two companies would account for 80 percent of all SaaS revenues, I decided it would be interesting to see where the market is now, and just how consolidated it is. It turns out that it’s actually pretty hard to estimate just how large SaaS revenues are and to compare the different types of companies. After all, some companies, such as Salesforce and Workday, are “cloud-native,” and only offer cloud solutions. But the biggest software vendors have also purchased more SaaS solutions. For example, Oracle acquired NetSuite, and SAP bought Concur. (For more on these, see my last post.)


I also left out a number of security and networking vendors, since these aren’t really general productivity applications, as well as a few obvious vertical market solution providers (such as athenahealth and FIS), as they aren’t general SaaS providers.


Then there are the vendors who just don’t give enough detail to make their SaaS revenues at all clear. Amazon WorkSpaces, for example, is probably a rounding error in comparison with the company’s long list of infrastructure and platform services. Similarly, G Suite belongs here somewhere but Google doesn’t break it out, and it is certainly a small part of the company’s overall revenues. The same thing is true for some of the larger, more diversified technology companies: Dell Technologies offers a number of SaaS products, such as Spanning and Boomi, but doesn’t break out the numbers, and this is probably a small percentage of revenue. The same is true for Cisco.


Of course, the biggest issues come up for companies where SaaS is a significant percentage of revenues, but where the definitions are not clear. It’s tough to break out cloud revenues among companies that are more diversified and offer both SaaS and on-premises software, maybe even some hardware. So I’ll admit that these are just guesses, and would love any comments that would help to make them more accurate. Here’s the list, but pay attention to the notes below.



1) Microsoft said its “commercial cloud revenue” run rate grew to more than $14 billion, suggesting quarterly revenues of about $3.5 billion, which would be split among its Azure (IaaS and PaaS) offerings and its Office 365 and Dynamics 365 SaaS services. Its total “productivity and business processes” group, which would include these products as well as traditional Office and on-premises offerings, did $7.4 billion in revenue. I’m going to guess that Office 365 and similar products are a bit bigger than Azure, so let’s say $2 billion.


2) For ADP, unlike most of the companies on the list, it’s mostly a question of what is software and what is a service. The company said it did $2.3 billion in revenues of “employer services”—essentially human capital management and HR services, including payroll. Some people would call this SaaS; others wouldn’t. Since it competes with companies like Workday and Ultimate Software, I’m including it. If we call half of it SaaS, that’s $1.15 billion, landing it near the top of the list.


3) Adobe reported a run rate of $4.01 billion for its “digital media annualized recurring revenue” including its Creative Cloud and Document Cloud products. Turning that into a quarterly number would make it about $1 billion. Much of this is client software delivered in a cloud model (just like with Office 365), so as with ADP, I’m counting half of that, or $500 million, and then adding the $465 million from its Marketing Cloud product. (I sketch this arithmetic in code just after this list.)


4) Intuit is an interesting case, in that its business is highly seasonal, since its tax preparation and electronic filing software is used much more in the early part of the year. The consumer part of that business is mostly online, accounting for 90 percent of the company’s TurboTax users in its big quarter, and virtually all of the users in the current, smaller quarter. In its last reported quarter, the consumer tax business accounted for $42 million in revenue, but in the previous quarter, it was $1.6 billion. Meanwhile, QuickBooks Online and related products accounted for $179 million. So for the most recent quarter, the SaaS number would be roughly $221 million (not counting desktop or enterprise versions of QuickBooks, other small business products, or the professional tax business). However, that’s not representative of the year as a whole. I took the full year’s consumer tax business (just shy of $2 billion), took 90 percent of that, divided by 4 to get a “typical quarter,” and added in the QuickBooks online number, which gives me $662 million. This seems more representative, arguably.


5) IBM doesn’t distinguish among the different kinds of cloud revenue it earns and calls some things cloud that I wouldn’t, but I’m listing it with $600 million, based on reported cloud revenues for its cognitive services group, which includes Watson and other analytics. Based on most definitions, that’s probably high, but it’s the best I could find.


6) Oracle reported $878 million in combined SaaS and PaaS services, meaning a combination of its cloud-based applications, such as HR and CRM as well as database and similar services. For much of Oracle’s business—specifically, the apps that make up its E-Business Suite—customers need to be running both the application and the database platform it runs on. I’m taking half of the revenues, which would result in $439 million in quarterly revenue. (Note that NetSuite, which Oracle acquired, had $230 million in revenue in the second quarter).


7) Dropbox is a private company, but its CEO recently reported it was on a $1 billion run rate, so I’m taking this as $250 million in quarterly revenue.
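To make the estimation method concrete, here is the Adobe arithmetic from note 3 above as a minimal sketch. The 50 percent split is the same judgment call described in that note, not a reported figure, and the same pattern (run rate divided by four, then a split) applies to several of the other entries.

```python
# Rough quarterly SaaS estimate for Adobe, following the method in note 3.
digital_media_arr = 4.01e9     # reported "annualized recurring revenue" run rate
marketing_cloud_q = 465e6      # reported quarterly Marketing Cloud revenue

quarterly_digital_media = digital_media_arr / 4    # about $1.0B per quarter
saas_share = 0.5                                   # the judgment call: count half as SaaS

adobe_estimate = quarterly_digital_media * saas_share + marketing_cloud_q
print(f"Estimated Adobe quarterly SaaS revenue: ${adobe_estimate / 1e6:.0f}M")
# ~$966M per quarter under these assumptions.
```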


That’s the best I’ve been able to come up with, though I know it’s far from perfect. I’m sure I didn’t make all the right decisions, so I’d love any feedback on how to improve this list.


One thing that does stand out: of the top 20 vendors I found, the top two account for 40 percent of the revenue, a pretty strong percentage, but still a long way from the 80 percent concentration that Oracle predicted. However, if you look at total revenues and exclude IBM and HP (where applications are a very small part of the revenue), the two large vendors—Microsoft and Oracle—account for 64 percent of the total revenues. (Of course, these two also offer many things beyond applications software.) If you assume the bulk of applications revenues will convert to SaaS over the next few years, that may be a better predictor of how the revenues may break out. Also, recall that I’m excluding Amazon and Google, either of which could be considered part of this chart.


In other words, it seems quite possible that we’ll see significant consolidation in the field, either through the big companies growing their percentage of SaaS revenues or through acquisitions. However, getting to 80 percent seems like a tall order. Stranger things have happened, but it doesn’t look likely to me.


Again, I know I’m making a bunch of assumptions in creating this chart and would love to see more accurate estimates of SaaS revenue for the more diversified companies. I’m open to suggestions.





http://www.pcmag.com/article/351605/the-top-saas-vendors-and-why-consolidation-may-be-harder-th


Techonomy and the Economy: Is Change Happening Faster than Society Can Absorb It?

To me, the most interesting topic at last week’s Techonomy 2016 conference was the impact that technology and data are having on the economy as a whole. As the conference immediately followed the election, it was a topic that came up in a variety of sessions—with a surprising number of comments about how changing technology has made many people uneasy, and how that may be hurting the economy and affecting how people vote.



“Change is happening at a much faster pace than society can absorb the changes,” Tony Scott, the Federal CIO of the United States, said in the opening panel, noting that changes in technology, energy, and other areas are fundamentally changing where jobs are and how people live. Still, he said, “relentless digitalization” is inevitable.


Simulmedia CEO Dave Morgan noted that job loss to technology will only intensify, as 1.5 million driving jobs—the largest single job category for white men outside of the government—will disappear over the next 4-5 years. (I believe he is wildly overestimating the pace of change here, but we’ll see.) Morgan stressed that, though economic issues are important, dignity is also important; in the small city in Pennsylvania where he grew up, people not only used to have jobs, they felt good about them.


Morgan referenced a 1946 book by Peter Drucker, Concept of the Corporation, which lamented the growing use of cost accounting, and argued that the relationship between labor and management had changed. Morgan said that in the 1950s, businesses paid a living wage, offered health plans to cope with catastrophic incidents, and offered pensions, so workers participated in the growth of a company. Over time, pensions have disappeared, fewer companies offer health insurance, and wages are now considered costs.


Blackberry CEO John Chen said the bicoastal tech industry has largely missed the concept of jobs, and this has led to some of the anger directed toward the industry. Chen said he supports infrastructure investment and stressed the importance of cybersecurity.


Scott agreed that some paradigms need to be reexamined. He noted that we have an assumption that everything should interoperate with everything else, but in the near future, we may need to ask whether the system you might connect to is safe and performing the way it should.


Scott said that the government is on an unstoppable track to digitization that should improve interaction with the citizenry. For instance, he said that today’s technology pretty much follows the org chart, so you need to understand the organizational structure to locate a site for the information you’re after. This, he said, will change no matter who is president.


Similarly, Scott said the federal government spends $85 billion a year on technology, with more than 80 percent of this going simply to “keep the lights on.” He said we are now “air-bagging and bubble-wrapping old stuff” for cybersecurity, and that we need to upgrade and replace systems in order to get to a more modern platform. Scott mentioned that there was a bipartisan bill to create an Information Technology Modernization Fund to hasten IT advancements and upgrades at the federal level.


There were a number of good questions and comments from the audience. Gary Rieschel of Qiming Venture Partners, who spoke in an earlier session, said there is a perception among Trump and Sanders supporters that “America is no longer fair.” Where you live and how much money you have determines your quality of education and access to healthcare, Rieschel suggested, and while technology may help, it can only do so if it comes from the citizens up, and not from the top down. Rieschel pointed out that, until the 1970s, unions had large apprenticeship programs, but since then the skills of workers have eroded as older workers retired and younger workers weren’t retrained.


Roger Pilc of Pitney Bowes talked about how technology has helped to democratize international trade. He quoted Alibaba’s Jack Ma as saying that over the last twenty years this has mostly helped large businesses, but that over the next twenty it may help medium and small businesses. Pilc pointed to areas like shipping and logistics, citing cloud technologies, APIs, mobile, and IoT as things that can help smaller firms, and noted that most job creation comes from small and medium-sized businesses.


Others in the audience talked about how technology may not be the answer; how U.S. companies could build call centers and even coding centers in Middle America; and education. I noted a comment that the technology industry should not be surprised by the anger in the country, as many groups—especially women and minorities—are also angry at how they have been treated by tech.


The Economic Impact of Data Convergence


Annunziata, Farrell, Kirkpatrick


I was quite interested in a conversation on the economic impact of data convergence, which featured GE’s Chief Economist Marco Annunziata and Diana Farrell, Founding President and CEO of the JP Morgan Chase Institute and a former Deputy Director of the National Economic Council.


David Kirkpatrick, who moderated the discussion, said that data shows that life is improving in almost every major country. But Annunziata said that in most cases, the narrative is more powerful than the data. He said there is a lot of hype around data, but that the impact of data on the economy has been small. Going forward, however, Annunziata talked about using data to generate value.


Farrell said that one big problem is that while the overall economy has strengthened, the level of anxiety remains high. She said take-home pay has been particularly volatile, with 55 percent of Americans seeing a swing of income of over 30 percent month-to-month over the course of a year. Farrell said a fear of “the liquidity trap”—a concern of running out of liquid money—is true for almost all Americans.


Farrell said that the “gig economy” employs about 1 percent of adults in a given month, and only 4 percent of adults over the last three years. These are primarily young and disproportionately low-income workers, who mostly view such work as supplemental income, used to offset volatility but not as a replacement for a job.


In a discussion of how people view data, Ford Motor Co. VP of Research and Advanced Engineering Ken Washington said that even though the government has lots of data on people, it is all in silos, and thus it is incredibly difficult to obtain holistic information on an individual. Washington said there were few ways for either the government or commercial companies to pull this information together, and said people are frustrated that the data is out there but not improving their lives.


Annunziata agreed, and said it seemed strange that the government “knows all this information about me, but treats me as a stranger when I go to the airport.” Annunziata worries about things like data sovereignty laws in Europe. He said ring-fencing data doesn’t make it secure, and that preventing data from being aggregated could negate its value.


On the question of government use of data, I was interested in a separate discussion with Marina Kaljurand, former Minister of Foreign Affairs for the Republic of Estonia. She talked about how her country had created an “e-lifestyle” that started with government digital systems used to pay taxes, to vote, and to receive report cards. This was based on digital signatures using two-factor authentication and the goal of having a “paperless” approach to government. I think that’s an interesting goal, but one that seems hard to reach in a country as diverse as the U.S., where individual states have their own policies and rules.


Overall, I wonder if Silicon Valley overestimates its direct impact on the economy, but underestimates the secondary impacts of the new technologies it creates.

http://www.pcmag.com/article/349600/techonomy-and-the-economy-is-change-happening-faster-than-s


Waiting for the Singularity at Techonomy

As they have been at just about every conference I’ve attended this year, artificial intelligence and machine learning were major topics at last week’s Techonomy 2016 conference. In addition to now-standard discussions of where AI is headed, a talk from Ray Kurzweil, and a conversation on where autonomous vehicles may be going, the conference included a discussion and videos of direct machine-to-brain interfaces that were among the most interesting things I’ve seen all year.


Circuits of the Mind
Those cool videos came from Justin Sanchez of the DARPA Biological Technologies Office. He showed one video of a mind-controlled robotic arm, which was fascinating, before moving on to discuss a direct neural interface, in which computer memory is attached directly to the brain of a patient with a traumatic brain injury. Sanchez then showed a compelling video in which a patient is asked to memorize a dozen common words; normally the patient can later recall only three words, but when attached to the system, the patient could recall all twelve.


Sanchez cautioned that these are very early days for the program. It is designed to restore brain functions to military personnel who have paid such a price for our country, but he said there are many exciting aspects. Work has begun with racks of computers; the goal is to work toward miniaturized systems that could be implantable. As part of this effort, the program seeks to gain a broader understanding of the cognitive functions of the brain.


Sanchez was joined in a panel by Leslie Valiant of Harvard University, who described what he called the “ecorithm era,” which combines algorithms that learn from the environment, supervised machine learning, and biological evolution. Valiant said that Darwinian evolution is basically a kind of supervised machine learning.


He noted that there is a lot we still don’t know about brain function, such as how many neurons it takes to remember what you had for breakfast. Sanchez noted that we are learning more about the brain, and also about how memory is distributed throughout the brain.


Both agreed that while supervised machine learning might work to start algorithms for augmenting the brain, other techniques like reinforcement learning will eventually be needed. A fixed algorithm won’t work in the long run for everyday life, Sanchez said. Instead, it will need to adapt.


Towards The Singularity and Ethical AI



In a dinner speech, inventor and author Ray Kurzweil, who now works on AI for Google, reiterated his prediction that by 2029 a computer will have good enough language skills and knowledge in a full range of subjects to enable it to pass a valid Turing test. By 2035, Kurzweil believes we’ll be able to connect computers directly to our neocortex to expand our memory, and by 2045, we’ll have computers a billion times more powerful than all humans combined, a development he calls The Singularity.


Kurzweil said the big breakthrough in AI in recent years has been the development of multiple-layer neural networks, but noted that current systems require a lot of data. “Life begins at a billion examples,” he joked about the current systems, and said a big challenge has been developing computer systems that can learn from smaller amounts of data.


Kurzweil was joined on a panel by Benjamin H. Bratton of the University of California, San Diego, and Vivienne Ming of Socos, who stressed that AIs and humans will work together in the future. Bratton’s book, The Stack, talks about how recent advances in computing, including automation, are creating an “accidental megastructure” that is both a computational apparatus and a new governing architecture. Ming talked about AI augmenting humans, and the need to build a world where people actively create new things.


In another session, Francesca Rossi of IBM’s T.J. Watson Research Center talked about the need for “ethical AI,” saying we need to have a discussion of what rules should govern AIs. This discussion should include not only the top five companies that people think of when talking about AI, but everyone, especially people deploying AI in the real world. The goal is to build trust over a period of time, not just once, she said.


Autonomous Vehicles in Sentient Ecosystems



(Delaunay, Hodjat, Washington, Chui)


In a panel on “sentient ecosystems,” Ford Motor Company VP of Research Ken Washington said there is a promise of both autonomous vehicles and smart vehicles that know us, based on radar, lidar, cameras, microphones, and other sensors that can process and respond. However, while progress is happening very quickly, we are not there yet. For instance, Washington described a car that will automatically turn the heat on when it’s cold outside.


He said there are two “potholes” on the road to this vision: cyber-security and privacy, which he sees as two distinct issues. Washington said consumers will need to be able to trust that an autonomous car will do good things for them, and he is confident that autonomous cars can be safer than a human driver, noting that 30,000 people a year die in car accidents. Washington also said companies need to be clear that the consumer owns their data, and grants the car companies permission to use it for particular purposes. Ford will never sell your data, he said, but will use it to keep you safe and to give you a better experience. Ford plans to offer high-volume production of vehicles for ride sharing in 2021, with 100 test vehicles on the road by 2018.


Claire Delaunay of autonomous trucking company Otto (now part of Uber) said one issue has been how an autonomous vehicle makes a decision. Vehicles can only see the things you teach them to see, she said, so they need to keep learning. Sentient Technologies co-founder Babak Hodjat said that because such systems keep a log of the data used in each decision, when accidents do happen, future accidents can potentially be prevented. “We can’t do that with a human,” he noted.

http://www.pcmag.com/article/349680/waiting-for-the-singularity-at-techonomy


Spreading, Securing and Regulating the Internet of Things at Techonomy

The many different applications of the Internet of Things (IoT), the new business models that IoT enables, and the issues involved in securing and regulating these were a big topic at last week’s Techonomy 2016 conference. I was particularly interested in hearing about several new examples of IoT uses and more concrete ideas regarding regulations and security.


The Vast Internet of Things
One interesting panel covered some of the more unusual projects that are now being considered as part of the Internet of Things.


Sara Gardner of Hitachi Insight Group discussed using IoT to automate factories, to help companies move from selling products to offering services, and to improve “social infrastructure” like transportation and energy. Gardner discussed using IoT devices to improve safety in places like mines, where remote operation might help keep people safe, and things such as facial recognition on cows in agriculture, to improve herd management.


Eric Topol of the Scripps Research Institute talked about “digitizing our bodies” using information taken from electrocardiograms, and devices to monitor conditions such as sleep apnea and ear infections. The goal is to anticipate problems before they occur, furthering preventative care. Usually, Topol said, this happens with wearables, not through sensors embedded in the body, though in some cases—such as preventing heart attacks—embedded devices may prove necessary.


Topol said the concept of the hospital as we know it today will change. Instead of waiting weeks to get an appointment with a primary care physician, he anticipates future doctor visits will often involve video chats and data exchange, possibly supplemented by a doctor visiting you. People should only have to go to a hospital for very specific procedures, like surgery. One issue, he said, is that a patient needs to own his or her data. Topol said that right now everyone has that data except for the patient; this situation has to change. “Medical technology will completely reboot healthcare,” he said.


Tom Barton of Planet Labs talked about the company’s goal of imaging all of the land area of the planet at least once a day, using consumer electronics to build much smaller, much less costly satellites. Barton said the company has already launched fifty satellites and plans to launch another hundred. Most of the company’s customers today are in agriculture and government, and he described applications ranging from land use categorization and tracking deforestation to improving agricultural yield management and the size of the global food supply.


Smart Cities and Smart Seas


(Gaudette and Regas)


Another panel discussed the concept of “smart cities” and how this is much more complicated than it is often portrayed. Martin Powell of the Siemens Global Cities Centre of Competence discussed how using data to create “smart cities” might have different impacts than you would expect, such as how banning bikes in London would actually reduce pollution.


Mrinalini Ingram of Verizon talked about how citizens are now able to play a larger part in the process of managing cities. Moderator Gary Bolles mentioned that so many people use Waze in L.A. that residents in some newly-trafficked zones are complaining or trying to feed the app false information. Powell said that generally you can’t give citizens planning decisions, but you do need to take data, aggregate it, and control it at the municipal level.


Assaf Biderman of MIT’s SENSEable City Lab said citizens need to feel a sense of ownership of data, though some things can’t be voted on. He also said that many offerings won’t come from the cities themselves but will instead come from the outside.


A panel on “smart shipping” talked about bringing IoT concepts to shipping, with moderator Simone Ross noting that despite 90 percent of global trade moving on the oceans, this is a “data dead zone.” Peter Platzer of Spire Global talked about his company’s plans to use small satellites to traverse the ocean three or four times an hour instead of only a few times a day, while Anthony DiMare of Nautilus Labs discussed his firm’s plans to bring big data analytics to the shipping industry, asserting that Nautilus could use data and analytics to reduce fuel usage by as much as 30 percent.


John Kao of maritime security firm Thayer Mahan discussed the problem of visualization from the top of the ocean down, saying that this issue has important implications for mining, fishing, and geopolitical affairs. Platzer noted that there is currently $10 billion in annual piracy on the seas, and said 80 percent of the time we don’t know exactly where the ships are. Kao agreed with others that there are few laws and no overarching framework on maritime security, and talked about how we have little monitoring of undersea cables or ports.


All of the panelists agreed that in the years to come we must develop the ability to know more about the location of ships on the ocean, as well as what the conditions are underwater.


Another area covered was the power grid, and using IoT to combat climate change. Robert Gaudette of independent power provider NRG Energy said that IoT creates a whole lot of demand for energy, but can also help us manage the load on the grid.


Diane Regas of the Environmental Defense Fund said that some countries have now gone up to four days using only renewable energy, and agreed that IoT could open up new energy solutions, including allowing people to adjust both energy demand and energy production. She also addressed the issue of incentives for utility companies, saying they must change from rewarding utilities for how much they invest to rewarding them for performance, including how much they lower emissions.


Medical Applications


(Tas and Tyson)


Perhaps the sector most ripe for change is health care. “We have a real opportunity to transform the health care system in this country, and indeed the world,” Kaiser Permanente CEO Bernard Tyson said. In the past year, the majority of Kaiser’s interactions with patients took place using a “care anywhere” model: a secure “e-visit” over a phone, tablet, or PC instead of a trip to the hospital.


Tyson explained that Kaiser is a fully integrated system—it brings together insurance coverage and health care—and that this model is designed to facilitate prevention, early detection, and early treatment.


Philips CEO Jeroen Tas said that 80 percent of the cost of health care today is related to chronic disease, which is often influenced by social factors as well as personal choice. Today health care is reimbursed around acute events, but it should instead be covered based on the outcome. “If you pay for sickness, you get sickness; if you pay for health, you get health,” Tas said.


Tas suggested we need a new way of delivering care, and that much of it will no longer take place in a hospital. Instead, friends and families could be part of a new way of organizing care. He mentioned that the UK’s National Health Service now has 1.5 million volunteers to assist with care.


Both men agreed that you can reduce health care costs, but only if you reimburse on outcomes, not on care, with Tyson noting that Kaiser doesn’t get more money by having more patients in its hospitals. He added that medical devices can augment humans by helping to make early detection possible before a problem develops. As examples he mentioned warning about dehydration, measuring blood sugar in people with diabetes, and simply checking whether medications are being reordered properly, which gives an indication that they are being taken. (There was a suggestion that half of all medication is not actually consumed.)


Tas suggested that AI will play a bigger role in making sure the trail of diagnosis, treatment, and prescribed medication stays within the patient record, making it easier for doctors, patients, and clinicians to access pertinent information. He said he is seeing a lot of progress in this area, and that within a couple of years such systems will often support decision-making.


The Role of Government


Murthy Renduchintala


In a separate conversation on IoT and the U.S government, Intel president Murthy Renduchintala envisioned an environment that fuses together computing, pervasive and ambient communications, and advanced machine learning. He talked about computing and intelligence at the edge of such an environment being able to contribute information to a central data repository and then receive aggregated information after it is processed.


Overall, he said, the U.S. does a good job of gathering information, but many other countries have a head start in areas like regulation for self-driving cars, or smart cities. In technology, the U.S. is doing a phenomenal job, but “in terms of harnessing technology, we’ve got some catching up to do.”


Renduchintala said areas such as autonomous driving, robots, and drones seem to demand some degree of legislative involvement, and speculated about when such systems may go mainstream. He described three main areas where the government could get involved: basic legislation, meaning understanding what is happening and how government can catalyze technology; a national R&D strategy, which is about making sure everyone is playing in the same sandbox, especially in areas like autonomous driving and drone regulation; and security, where the government must play a key role. He noted that security isn’t just about the end device, but also about protecting information as it is transferred over very large networks.


Renduchintala said there is a burden on industry to better educate everyone concerned, including Congress. He expects to see an economic benefit from IoT, particularly in areas such as autonomous cars, because the time people now spend stuck in traffic could be put to more productive use.


GE and “Productivity of Things”


Ruh


GE Digital’s Bill Ruh (right) talked about GE’s digital transformation. We are moving from a world where owning an asset is the value to one where the data and services around the asset are becoming more important. A few years ago, he said, GE CEO Jeffrey Immelt could see that data, AI, and statistics coming together could make assets more efficient, and that a company that figured out how best to maintain and use those assets had the potential to disrupt GE. So, the company decided to get out in front of this transformation. Over the next three to five years, every industrial company will go through this transition. “If they don’t, someone else will,” Ruh said.


Ruh discussed GE’s “digital twin” concept and said that while AI is great, it only addresses part of the problem unless you know the details of the actual physical device. Instead, the company combines modeling and AI to run simulations—starting with digital twins for wind farms, and then moving into other areas such as power plants and rail networks, with systems designed to figure out the order in which to move trains within a system. Many of these decisions were not intuitive, he said.


Overall, Ruh said, the most important takeaway has been the “productivity of things,” adding that the three sexiest words in the industrial world are “zero unplanned downtime.”


Ruh said he worries about trade policies and data sovereignty, but that in the end, it’s a job issue. He noted that we have seen slower GDP growth and an increase in automation that has in turn driven down lower-skill jobs.


Ruh said that while automation will happen, we may see a shift from labor arbitrage automation to local content, with more manufacturing moving local, in part due to things such as additive manufacturing (3D printing). “We don’t know how it will play out,” he said. Ruh did say that regulation will be needed, but that it needs to be smart and not unintentionally block innovation.


Securing the Internet of Things


(Bartolomeo, Cooper, Eagan, Rill, Higginbotham)


Stacey Higginbotham of the Internet of Things Podcast moderated a session on IoT security and pointed to things like the Mirai botnet, which used millions of old connected webcams to launch a recent distributed denial-of-service attack that left many websites unreachable for some time.


Mark Bartolomeo, VP of Internet of Things M2M Connected Solutions at Verizon, agreed that security is a huge problem, but said it is a problem we’re solving. Bartolomeo pushed for more security standards for devices but said we also need network security, host and IT security, and better employee training. He said security is “a problem we’ll never stop working on,” pointing to Verizon’s recent study of 100,000 breaches. (Later, I had a good conversation with him about how quickly Verizon expects IoT to grow as connectivity prices decline, and also about the complexity of deploying such systems today and the need for better security.)


Betsy Cooper, executive director of the Berkeley Center for Long-Term Cybersecurity, said we can never be fully secure. Her group works through many possible scenarios, and there is always a risk of at least a small degree of failure, she said.


Darktrace CEO Nicole Eagan said there is no perimeter anymore, and agreed that if you have a sophisticated threat actor, they will get in. Instead, Eagan said, it is important to have visibility into devices to see what is going on and react accordingly. To that end, her firm’s product emulates the human immune system.


Chris Rill, co-founder and CTO at Canary, which makes a wireless home security product, said one big question is who you are trying to protect yourself from. Protecting systems from “script kiddies” is possible, he said, but defending against government actors is much harder. Rill said some companies like his really care about security, while others just treat it as a checkbox feature. While he hopes that in the future security will help drive demand for particular products, he admitted that consumers generally don’t ask for the most secure products.


The panel discussed various ways of making sure devices are secured, with a number pointing to ICSA certification at multiple layers of devices. Rill talked about how Canary specifically doesn’t open more ports than necessary, while Eagan talked about how connectivity has left many devices, including most video conferencing systems and even nuclear power plants, vulnerable to specific attacks. She said if you can watch network traffic, you can detect anomalous behaviors early, before damage is done, and that the next step will be “algorithm vs. algorithm,” particularly as nation-states start tapping math experts to create their own AIs.
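

Eagan’s point about watching network traffic boils down to baselining: learn what “normal” looks like for each device and flag large deviations. As a toy illustration only (a rolling z-score in Python with an arbitrary threshold, nothing like Darktrace’s actual product):

```python
# Toy anomaly detector: flag a device whose traffic deviates sharply from its own recent baseline.
from collections import deque

def make_detector(window=60, threshold=4.0):
    history = deque(maxlen=window)           # recent per-interval byte counts for one device
    def check(volume):
        if len(history) >= 10:               # need some baseline before judging
            mean = sum(history) / len(history)
            std = (sum((x - mean) ** 2 for x in history) / len(history)) ** 0.5 or 1.0
            if abs(volume - mean) / std > threshold:
                return True                  # anomalous spike (or drop) in traffic
        history.append(volume)
        return False
    return check

detect = make_detector()
for t, volume in enumerate([1200, 1100, 1300] * 10 + [250000]):
    if detect(volume):
        print(f"interval {t}: anomalous traffic volume {volume} bytes")
```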


Regulation was also a big topic, with Cooper decrying the “balkanization of cybersecurity regulation.” Cooper said we need to get the right players in the same room, and suggested that the person in charge of cybersecurity at the White House should have elevated responsibilities. Eagan noted that lots of devices are not actually created in the U.S., and Rill talked about some of the steps Canary has taken to ensure it knows every component that goes into its products during their manufacture in China.

http://www.pcmag.com/article/349728/spreading-securing-and-regulating-the-internet-of-things-at

Continue Reading

Living With an iPhone 7 Plus

For the past several weeks, I’ve been using an iPhone 7 Plus, and have found it to be a wonderful smartphone. Compared to the iPhone 6s Plus, the biggest changes are the improved camera, a changed home button, and of course, the lack of a traditional analog headphone jack. I got used to the home button and the headphone changes fairly easily, and find that my day-to-day experience really hasn’t changed much—it’s a fast, powerful phone that is arguably the best on the market today.


From a design perspective, the 7 Plus really doesn’t look all that different from the 6s Plus, or for that matter, the 6 Plus. Visually, the big difference is a wider opening for the rear cameras and no headphone jack. From a distance, you can’t really tell them apart.


It still measures the same 6.23 by 3.07 by 0.29 inches—quite reasonable for a phone with a 5.5-inch display—though Apple has shaved a bit off the weight. Unlike some of my colleagues, I like the big size—I find it fits in my pocket (just barely—6-inch phones typically don’t) and I find the bigger screen is simply better for reading or Web browsing. But I know that’s a personal preference.


The 5.5-inch IPS display continues to have a 1920-by-1080 resolution (401 pixels per inch), which I find to be terrific, though not quite top of the market. In other ways, the screen is better, with greater brightness giving it a higher contrast ratio and a wide color gamut providing the most accurate color of any display on the market. (DisplayMate has more details on this.) All I can tell you is, it looks great.


The processor has improved, with Apple talking about how its A10 Fusion chip now has two high-power cores and two high-efficiency/low-power cores (two fast and two slow), similar to lots of the processors on Android phones. It also includes a six-core GPU, which Apple says is 50 percent faster than last year’s A9. Benchmark results look very good, and it did feel a bit faster in actual use than the previous model, but not enough so that I’d call it a significant change.


It weighs 6.63 ounces, a slight improvement from the 6.77 ounces of the previous model, though you won’t notice the difference. It has a 2900 mAh battery, and Apple claims it lasts up to an hour longer than the previous model. PCMag’s tests also show a significant improvement in battery life, but in the real world, I can’t say I noticed it. It still gets me through a full day of normal use without a problem, but I have to charge it every night.


One relatively minor change in this year’s model is that Apple chose to go with a home button that uses “taptic” feedback (effectively vibrations) rather than a button that physically depresses. Even though I’ve used similar virtual buttons extensively on Android phones, it took a bit of getting used to on an iPhone. It doesn’t feel the same, but once you become accustomed to the new home button, it works great. I find Apple’s fingerprint recognition to be a bit better than on any other phone I’ve used.


Perhaps the thing that has gotten the most attention on the iPhone 7 is how Apple removed the analog headphone jack. That has taken a bit of getting used to. The phones come with earbuds that connect to the Lightning port, and also with a cable that adapts the Lightning port to a traditional headphone jack so you can use your existing wired headphones. Those both work fine. Still, it’s a pain to carry around another dongle, and you can’t use it when you’re using the Lightning port to charge the phone. Apple’s own AirPods wireless earbuds haven’t shipped yet, but there are many alternatives on the market. I was happy with the Plantronics BackBeat Go 3 earphones, which sounded quite good (though of course, wireless headphones are another thing that needs to be charged). One tradeoff for the lack of an analog port is the addition of stereo speakers, which sounded pretty good.



The biggest change on the 7 Plus—and indeed, other than the size, the one thing that really makes it different from the smaller iPhone 7—is the addition of a second rear-facing camera. The phone uses two 12-megapixel cameras: a normal one with an f/1.8 aperture and one described as a 2x optical “telephoto” lens. Compared with the previous generation, which had an aperture of f/2.2, the new main lens lets in more light, which generally makes for better photos, particularly in low light. The second lens may not technically be a telephoto, but it does let you get closer to your subject, though with a smaller field of view.


Grand Central iphone 7+ zoom


You get to the second lens by pinch-and-zooming on the screen or hitting the zoom circle in the camera app, and in general, it worked pretty well, letting me get somewhat closer pictures. But it’s still only a 2x lens, so it hasn’t replaced my need for a superzoom digital camera with a 10 to 30x telephoto lens. The phone also offers 4K video recording at 30 frames per second or 1080p capture at 60 frames per second, and optical image stabilization for video capture, as well as a 7-megapixel front-facing camera.


Overall, the pictures looked great. I thought daylight photos were the clearest I’ve seen yet, with the best color. Low-light pictures also are definitely improved over the iPhone 6s Plus, though a bit noisier than the low-light photos I got using the Google Pixel.


Grand Central night iphone 7+


One interesting new feature, still officially in “beta” but included in recent versions of iOS, is “portrait mode,” which uses the two rear-facing cameras together to create the blurred background or depth effect (known as “bokeh”) you see with many DSLRs. I’ve seen other phones try similar features with mixed results, but I thought it worked pretty well, if not quite up to the standards of top-end cameras.


Another new feature for this year’s model is water resistance, to the IP67 standard, which is certainly appreciated. On the other hand, the iPhone lacks a few of the features I’ve liked on Android phones such as the Samsung Galaxy S7: an always-on display, wireless charging, and of course a microSD slot for additional storage.


On the software side, I’ve found iOS 10.1.1 to have its pros and cons. 3D Touch worked well on the device, and I appreciated the new options in the camera app. On the other hand, I’m not thrilled with threading in the mail app, and have had a number of friends ask me how to turn it off (which isn’t hard). I’ve noticed a few more crashes with iOS 10 than I had with iOS 9.


Siri is getting better over time, but it’s still inconsistent. Sometimes it gives great answers, but I still find it often sends me to Web pages instead of just answering my questions. Apple Maps has improved and usually works quite well. But recently while heading from New York to New Jersey, it told me to make a U-turn in the middle of the George Washington Bridge. I still prefer Google Maps or Waze; of course, you can run these as well, though they aren’t integrated with Siri.


In general, I find Apple’s software to be somewhat better integrated than Android, but the differences have become slimmer over time. Of course, iOS works pretty much the same no matter which recent iPhone you’re running, assuming you upgraded to the new version. One big difference is that Apple charges more for cloud backups, which is particularly an issue for storing photographs; I generally find iCloud to be better integrated, but less full-featured, than Google’s equivalents. The iPhone 7 Plus is available in jet black (which remains effectively unavailable), matte black, silver, gold, and rose gold, and with 32, 128, or 256 GB of storage.


Overall, I have a few quibbles, such as the lack of the analog port, and a few crashes. But in general, I’ve found it to be a great phone with a very fast processor, wonderful display, and a terrific camera—in many ways, it’s the best smartphone I’ve yet used.


Here’s PCMag’s full review.

http://www.pcmag.com/article/349831/living-with-an-iphone-7-plus

Continue Reading

Living With a Google Pixel XL

Over the past several weeks, I’ve been travelling with a Google Pixel XL. This is the larger version of the first Pixel phone line from Google, which the company says is different from the previous Nexus phones because Google was more involved in the design. I’m not sure how revolutionary the phone is, but it is certainly a very nice, very competitive Android phone.


I’ve heard a number of people talk about how the Pixel looks more like an iPhone, but I don’t really see that in the physical design, except for the things that all modern smartphones have—a front that is mostly screen, with rounded edges and a camera on the top. It lacks the physical home button below the screen that iPhones and the Samsung Galaxy S7 devices have; instead, it puts a fingerprint sensor on the back, like the Huawei-made Nexus 6P and many LG and HTC models. (Not too surprising, since HTC is said to have actually manufactured the Pixel.) On a larger phone in particular, I think the sensor-on-the-back arrangement makes it easier to handle with one hand, but it’s not a big differentiator. The fingerprint reader worked well, though it might be slightly less reliable than the ones on the iPhone or the Galaxy S7.


The back of the phone features a two-texture look, with a glossy top and a matte bottom; this doesn’t really bother me, but I can’t say it looks quite as high-end as the recent Samsung and Apple phones.


At 6.1 by 3.0 by 0.3 inches, it’s a hair smaller than the iPhone 7 Plus; at 5.93 ounces, it’s definitely lighter than the iPhone 7 Plus’s 6.63 ounces and a bit heavier than the Galaxy S7 Edge’s 5.54 ounces, though you probably won’t be able to tell the difference in daily use. It’s notably smaller than the Nexus 6, which had a 6-inch display compared with the 5.5-inch one used in the Pixel XL.


Indeed, the Pixel XL features a 5.5-inch 2560 by 1440 AMOLED display, which matches the resolution of other top-end Android phones such as the Samsung Galaxy S7 Edge or the LG V20. I thought the display looked great, with colors that really popped. It’s not quite as cool as the S7 Edge’s curved display, but looked quite good.


The phone uses a 2.15 GHz Qualcomm Snapdragon 821 processor with four of the company’s proprietary Kryo cores, plus Adreno 530 graphics, manufactured at 14nm. Compared with the iPhone 7 Plus, it scores a bit slower in some of the benchmark tests, though that’s partially due to the operating systems and to the higher-resolution display (which means more pixels to process). It seems comparable to other high-end Android phones, with 4 GB of RAM.


It has a 3450 mAh battery, and PCMag’s tests show it to last a bit longer than the iPhone 7 Plus, but not as long as the Galaxy S7 Edge. That matches my experience, though I didn’t find the differences to be very dramatic. As with all of them, I generally charge it every night. It does support fast charging, letting you get a good enough charge for several hours of basic use in 15 or 20 minutes.


The Pixel XL has a 12.3-megapixel rear-facing camera, and unlike the iPhone 7 Plus, it does not protrude from the back of the phone. It can take 4K or 1080p video, but unlike some of its competitors, it does not have optical image stabilization. It also has an 8-megapixel front-facing camera.



In general, I thought it took very nice pictures, among the best I’ve seen from Android phones. Daylight pictures looked very good, with bright colors, though I would rate the iPhone 7 Plus a tad higher. (See that review for comparative pictures).


Grand Central Night


I was even more impressed with low-light photos, where I saw noticeably less noise in the photos than I have from other cameras.


While the Pixel is a very strong phone, it does lack a few features that the Galaxy S7 line has, including an always-on display, wireless charging, water resistance, and support for a microSD card to expand storage—something that I’ve come to expect from most high-end Android phones. These features may not be game changers, but I do miss them. Like most Android phones—but unlike the new iPhones—it has an analog headphone jack, which is of course quite convenient.


The big thing that sets the Pixel, and previously the Nexus family of devices, apart from other Android phones is that it runs the “pure Google” version of Android. This means it has no special skins and all of the Google apps—Gmail, Photos, Docs, YouTube, etc.—are front and center. And the Pixel should get all the Android updates as soon as they are available.


After updates, the Pixel is running Android 7.1 (Nougat), which offers a somewhat simplified user interface, but otherwise visually isn’t a huge difference from Android 6 (Marshmallow). Google’s own apps and collections now appear as round icons, with the collections doing a better job of showing you what is inside. Most third-party apps still have square icons, though one assumes that will change over time.


One noticeable difference is that it no longer has a separate apps button for getting to all of the applications; rather, you slide up from the bottom of the screen to see all the apps. In appearance, this can make your home pages look more like those of the iPhone or earlier Huawei phones.


One nice feature Android retains is the ability to add widgets to your home pages. By default, the first home page of the Pixel includes a widget with the weather and date, plus an icon for searching on Google.


As with previous versions, you swipe left to see the Google Now page, which shows you “cards” with the most pertinent information, such as upcoming appointments or traffic. Other minor changes include making the alerts that pop down from the top of the screen a bit more attractive.


Of course, the most highly touted change is the newly renamed Google Assistant, which you get to by saying “OK, Google” or by long pressing the home button. Like Siri, this assistant does voice recognition and tries to answer your questions. It has certainly improved since earlier versions, and in general, I found it better—though still a long way from perfect—in giving me useful answers.


Many of the Google apps remain quite good, especially Photos, which gives you unlimited storage, something you don’t get with Apple’s iCloud. Overall, even though the apps themselves work quite well, Google seems to rely a bit more on the cloud.


Compared with other Android phones, the big difference is that the Pixel offers you an unfiltered path to these apps, and that makes it a bit simpler.


The Pixel comes in three colors (Quite Black, Very Silver, and Really Blue) and two storage variants, 32 GB or 128 GB. Technically, only Verizon sells the phone in the U.S., but it’s available unlocked through Google directly; I used that version on Google’s own Project Fi network (which in practice nearly always connected over T-Mobile when using cellular).


Overall, I found Pixel to be a very strong contender. It has a fast processor, very nice screen, and an excellent camera, particularly for low-light photography. It lacks some of the hardware features that make other Android phones stand out, such as a curved display, water resistance, and expandable storage. On the other hand, it gives you a purer, more consistent software experience without some of the extras other vendors might add (which often just get in the way). In short, the Pixel XL holds its own in any discussion of top-end Android phones.


Here’s PCMag’s full review.


http://www.pcmag.com/article/349846/living-with-a-google-pixel-xl

Continue Reading

Why Machine Learning Is the Future

At this month’s SC16 Supercomputing conference, two trends stood out. The first is the appearance of Intel’s latest Xeon Phi (Knights Landing) and Nvidia’s latest Tesla (the Pascal-based P100) on the Top500 list of the fastest computers in the world; systems using each landed in the top 20. The second is a big emphasis on how chip and system makers are taking concepts from modern machine learning systems and applying these to supercomputers.


On the current revision of the Top500 list, which gets updated twice yearly, the top of the chart is still firmly in the hands of the Sunway TaihuLight computer from China’s National Supercomputing Center in Wuxi, and the Tianhe-2 computer from China’s National Super Computer Center in Guangzhou, as it has been since June’s ISC16 show. No other computers are close in total performance, with the third- and fourth- ranked systems—still the Titan supercomputer at Oak Ridge and the Sequoia system at Lawrence Livermore—both delivering about half the performance of Tianhe-2.


The first of these is based on a unique Chinese processor, the 1.45GHz SW26010, which uses 64-bit RISC cores. This has an unmatched 10,649,600 cores delivering 125.4 petaflops of theoretical peak throughput and 93 petaflops of maximum measured performance on the Linpack benchmark, using 15.4 megawatts of power. It should be noted that while this machine tops the charts in Linpack performance by a huge margin, it doesn’t fare quite as well in other tests. There are other benchmarks, such as the High Performance Conjugate Gradients (HPCG) benchmark, where machines tend to see only 1 to 10 percent of their theoretical peak performance, and where the top system—in this case, the Riken K machine—still delivers less than 1 petaflop.
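

To put those numbers in perspective, the ratio of measured Linpack performance to theoretical peak is itself a useful figure of merit. Here is a quick back-of-the-envelope check using only the figures above (a sketch in Python, not an official benchmark calculation):

```python
# TaihuLight figures quoted above: how much of theoretical peak the Linpack run achieves.
peak_pflops = 125.4      # theoretical peak throughput
linpack_pflops = 93.0    # measured Linpack performance

print(f"Linpack efficiency: {linpack_pflops / peak_pflops:.0%}")  # roughly 74%
# On HPCG, by contrast, even top machines typically reach only 1 to 10 percent of peak.
```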


But the Linpack tests are the standard for talking about high-performance computing (HPC) and are what is used to create the Top500 list. Using the Linpack tests, the No. 2 machine, Tianhe-2, had been No. 1 on the chart for the previous few years; it uses Xeon E5 processors and older Xeon Phi (Knights Corner) accelerators. This offers 54.9 petaflops of theoretical peak performance, and benchmarks at 33.8 petaflops in Linpack. Many observers believe that a ban on the export of the newer versions of Xeon Phi (Knights Landing) led the Chinese to create their own supercomputer processor.


Knights Landing, formally Xeon Phi 7250, played a big role in the new systems on the list, starting with the Cori supercomputer at Lawrence Berkeley National Laboratory coming in at fifth place, with a peak performance of 27.8 petaflops and a measured performance of 14 petaflops. This is a Cray XC40 system, using the Aries interconnect. Note that Knights Landing can act as a main processor, with 68 cores per processor delivering 3 peak teraflops. (Intel lists another version of the chip with 72 cores at 3.46 teraflops of peak theoretical double precision performance on its price list, but none of the machines on the list use this version, perhaps because it is pricier and uses more energy.)
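

The roughly 3 teraflops per Knights Landing chip follows from the usual peak-flops arithmetic: cores times clock speed times floating-point operations per core per cycle. Here is a minimal sketch; the 1.4 GHz and 1.5 GHz clocks and the 32 double-precision operations per cycle (two AVX-512 fused multiply-add units per core) are commonly cited specs I am assuming, not figures from this article:

```python
# Peak double-precision teraflops = cores x clock (GHz) x flops per core per cycle / 1000.
def peak_teraflops(cores, clock_ghz, flops_per_cycle=32):  # 32 = 2 AVX-512 FMA units (assumed)
    return cores * clock_ghz * flops_per_cycle / 1000.0

print(f"68-core Xeon Phi 7250: {peak_teraflops(68, 1.4):.2f} TF")  # about 3 peak teraflops
print(f"72-core version:       {peak_teraflops(72, 1.5):.2f} TF")  # about 3.46 teraflops
```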


Earlier Xeon Phis could only run as accelerators in systems that were controlled by traditional Xeon processors. In sixth place was the Oakforest-PACS system of Japan’s Joint Center for Advanced High Performance Computing, scoring 24.9 peak petaflops. This is built by Fujitsu, using Knights Landing and Intel’s Omni-Path interconnect. Knights Landing is also used in the No. 12 system (the Marconi computer at Italy’s CINECA, built by Lenovo and using Omni-Path) and the No. 33 system (the Camphor 2 at Japan’s Kyoto University, built by Cray and using the Aries interconnect).



Nvidia was well represented on the new list as well. The No. 8 system, Piz Daint at The Swiss National Supercomputing Center, was upgraded to a Cray XC50 with Xeons and the Nvidia Tesla P100, and now offers just under 16 petaflops of theoretical peak performance, and 9.8 petaflops of Linpack performance—a big upgrade from the 7.8 petaflops of peak performance and 6.3 petaflops of Linpack performance in its earlier iteration based on the Cray XC30 with Nvidia K20x accelerators.


The other P100-based system on the list was Nvidia’s own DGX Saturn V, based on the company’s own DGX-1 systems and an Infiniband interconnect, which came in at No. 28 on the list. Note that Nvidia is now selling both the processors and the DGX-1 appliance, which includes software and eight Tesla P100s. The DGX Saturn V system, which Nvidia uses for internal AI research, scores nearly 4.9 peak petaflops and 3.3 Linpack petaflops. But what Nvidia points out is that it only uses 350 kilowatts of power, making it much more energy efficient. As a result, this system tops the Green500 list of the most energy-efficient systems. Nvidia points out that this is considerably less energy than the Xeon Phi-based Camphor 2 system, which has similar performance (nearly 5.5 petaflops peak and 3.1 Linpack petaflops).
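

Dividing the Linpack numbers by the reported power draw makes that efficiency gap concrete. This is only a rough check against the figures quoted above, not an official Green500 calculation:

```python
# Back-of-the-envelope energy efficiency from the figures in this article.
saturn_v_gflops_per_watt = (3.3e15 / 350e3) / 1e9    # 3.3 Linpack petaflops on 350 kW
taihulight_gflops_per_watt = (93e15 / 15.4e6) / 1e9  # 93 Linpack petaflops on 15.4 MW

print(f"DGX Saturn V: ~{saturn_v_gflops_per_watt:.1f} gigaflops per watt")   # ~9.4
print(f"TaihuLight:   ~{taihulight_gflops_per_watt:.1f} gigaflops per watt") # ~6.0
```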


It’s an interesting comparison, with Nvidia touting better energy efficiency on GPUs and Intel touting a more familiar programming model. I’m sure we’ll see more competition in the years to come, as the different architectures compete to see which of them will be the first to reach “exascale computing” or whether the Chinese home-grown approach will get there instead. Currently, the US Department of Energy’s Exascale Computing Project expects the first exascale machines to be installed in 2022 and go live the following year.


I find it interesting to note that despite the emphasis on many-core accelerators like the Nvidia Tesla and Intel Xeon Phi solutions, only 96 systems use such accelerators (including those that use Xeon Phi alone), as opposed to 104 systems a year ago. Intel continues to be the largest chip provider, with its chips in 462 of the top 500 systems, followed by IBM Power processors in 22. Hewlett Packard Enterprise created 140 systems (including those built by Silicon Graphics, which HPE acquired), Lenovo built 92, and Cray 56.


Machine Learning Competition


There were a number of announcements at or around the show, most of which dealt with some form of artificial intelligence or machine learning. Nvidia announced a partnership with IBM on a new deep-learning software toolkit called IBM PowerAI that runs on IBM Power servers using Nvidia’s NVLink interconnect.


AMD, which has been an afterthought in both HPC and machine-learning environments, is working to change that. In this area, the company focused on its own Radeon GPUs, pushed its FirePro S9300 x2 server GPUs, and announced a partnership with Google Cloud Platform to enable its GPUs to be used over the cloud. But AMD hasn’t invested as much in software for programming GPUs, as it has been emphasizing OpenCL over Nvidia’s more proprietary approach. At the show, AMD introduced a new version of its Radeon Open Compute Platform (ROCm), and touted plans to support its GPUs in heterogeneous computing scenarios with multiple CPUs, including its forthcoming “Zen” x86 CPUs, ARM architectures starting with Cavium’s ThunderX, and IBM Power 8 CPUs.


Intel AI Portfolio


At the show, Intel talked about a new version of its current Xeon E5v4 (Broadwell) chip tuned for floating point workloads, and how the next version based on the Skylake platform is due out next year. But in a later event that week, Intel made a series of announcements designed to position its chips in the artificial intelligence or machine-learning space. (Here’s ExtremeTech’s take.) Much of this has implications for high-performance computing, but is mostly separate. To begin with, in addition to the standard Xeon processors, the company also is promoting FPGAs for doing much of the inferencing in neural networks. That’s one big reason the company recently purchased Altera, and such FPGAs are now used by companies such as Microsoft.


But the focus on AI last week dealt with some newer chips. First, there is Xeon Phi, where Intel has indicated that the current Knights Landing version will be supplemented next year with a new version called Knights Mill, aimed at the “deep learning” market. Announced at IDF, this is another 14nm version but with support for half-precision calculations, which are frequently used in training neural networks. Indeed, one of the big advantages of the current Nvidia chips in deep learning is their support for half-precision calculations and 8-bit integer operations, which Nvidia often refers to as deep learning “tera-ops.” Intel has said Knights Mill will deliver up to four times the performance of Knights Landing for deep learning. (This chip is still slated to be followed later by a 10nm version called Knights Hill, probably aimed more at the traditional high-performance computing market.)
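

To make the half-precision idea concrete, here is a minimal sketch in plain NumPy, which is nothing like how Knights Mill or Nvidia’s chips actually implement it: FP16 values take half the memory of FP32 (part of why hardware can move and multiply roughly twice as many of them per second), at the cost of rounding error that neural-network training generally tolerates but traditional double-precision HPC codes would not.

```python
# Illustration only: compare a matrix multiply done in FP32 with one done in FP16.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float32)
b = rng.standard_normal((256, 256)).astype(np.float32)

full = a @ b                                                             # FP32 reference
half = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)  # FP16 version

rel_err = np.abs(full - half).max() / np.abs(full).max()
print(f"FP16 storage: {a.astype(np.float16).nbytes / a.nbytes:.0%} of FP32")  # 50%
print(f"Worst-case relative error from FP16: {rel_err:.2%}")                  # small, but nonzero
```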


Most interesting for next year is a design from Nervana, which Intel recently acquired; it uses an array of processing clusters designed to do simple math operations, connected to high-bandwidth memory (HBM). First up in this family will be Lake Crest, which was designed before Intel bought the company and is manufactured on a 28nm TSMC process. Due out in test versions in the first half of next year, Intel says it will deliver more raw compute performance than a GPU. This will eventually be followed by Knights Crest, which somehow implements Nervana’s technology alongside Xeon, with details still unannounced.




“We expect Nervana’s technologies to produce a breakthrough 100-fold increase in performance in the next three years to train complex neural networks, enabling data scientists to solve their biggest AI challenges faster,” wrote Intel CEO Brian Krzanich.


Intel also recently announced plans to acquire Movidius, which makes DSP-based chips particularly suited for computer vision inferencing—again, making decisions based on previously trained models.


It’s a complicated and evolving story—certainly not as straightforward as Nvidia’s push for its GPUs everywhere. But what it makes clear is just how quickly machine learning is taking off, and the many different ways that companies are planning to address the problem, from GPUs like those from Nvidia and AMD, to many-core x86 processors such as Xeon Phi, to FPGAs, to specialized products for training such as Nervana and IBM’s TrueNorth, to custom DSP-like inferencing engines such as Google’s Tensor Processing Units. It will be very interesting to see whether the market has room for all of these approaches.

http://www.pcmag.com/article/349936/why-machine-learning-is-the-future

Continue Reading