Is Gigabit LTE in your future?

While 5G was everywhere at Mobile World Congress, we still don’t have a standard, and it will take even longer before we have phones that support it. On the other hand, phones that support a gigabit connection over the existing LTE standard were unveiled at the show, promising faster connections much sooner.


One thing to understand about Gigabit LTE connections is that they require two things: a phone with a modem that supports an advanced level of carrier aggregation, or CA (the ability to use multiple blocks of spectrum at the same time), and a network that has the spectrum available to support these connections. Of course, your data plan will likely limit how much high-speed data you can use. But the idea isn’t really to download content at a gigabit per second for an extended period; rather, it is to get the content you need very quickly and then get off the network. This reduces the load on the network and boosts capacity, and it lets your modem stop transmitting sooner, which saves power.


Qualcomm introduced the first “Gigabit LTE” modem, the Snapdragon X16 chipset, in the run-up to last year’s show; this technology is now part of the Snapdragon 835 application processor.


This technology includes support for 256-QAM modulation, which packs more bits into each transmission; 4x4 MIMO, so it can receive data over four antennas; and support for up to four 20MHz blocks of spectrum using carrier aggregation (4x20MHz CA). The modem supports both licensed and unlicensed spectrum using LTE-U (for LTE Unlicensed), which is supported by a variety of operators in the U.S., Korea, India, and other markets.


Technically, it also supports LTE-Advanced, specifically LTE Category 16 for downloads, with a theoretical peak of 1 gigabit per second, and Category 13 for uploads, with a theoretical peak of 150 megabits per second. (Note that the Snapdragon 820/821 used in many of today’s top phones uses the company’s X12 modem, which has a theoretical peak download speed of 600Mbps. In the real world, congestion and spectrum availability get in the way.)
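To see roughly where the gigabit figure comes from, here is a back-of-envelope sketch. It is only an approximation (the exact numbers come from the 3GPP transport block tables), and the carrier/MIMO combination shown is just one plausible configuration, not a confirmed Qualcomm layout:

```python
# Rule of thumb: one 20MHz LTE carrier carrying one spatial stream at 64-QAM peaks
# near 75Mbps (Category 4's 150Mbps is two such streams). 256-QAM packs 8 bits per
# symbol instead of 6, so each stream scales by roughly 8/6.

def peak_downlink_mbps(stream_carriers, bits_per_symbol=8):
    per_stream_64qam = 75.0  # Mbps per 20MHz carrier per spatial stream at 64-QAM
    return stream_carriers * per_stream_64qam * (bits_per_symbol / 6.0)

# Ten 20MHz stream-carriers at 256-QAM (for example, 4x4 MIMO on two carriers plus
# 2x2 on a third) lands right around the Category 16 gigabit figure:
print(peak_downlink_mbps(10))  # ~1000 Mbps

# Six stream-carriers at 256-QAM is near the X12 modem's 600Mbps ceiling:
print(peak_downlink_mbps(6))   # ~600 Mbps
```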


Qualcomm Gigabit LTE test


A couple of companies demonstrated phones using the Qualcomm Snapdragon 835 at the show, including Sony, which showed its Xperia XZ Premium, due out in June. Qualcomm demonstrated the technology on the show floor, with downloads of just under 1Gbps.




ZTE also ran its own speed tests on the show floor. Still, the first phone expected to ship widely with this processor is the upcoming Samsung Galaxy S8.


Meanwhile, the big network equipment providers to the telecom industry are also pushing this technology, and both Nokia and Ericsson talked it up at the show.


Last week, Sprint said it was debuting a network that supports Gigabit LTE in New Orleans, using three-channel carrier aggregation and 60MHz of its 2.5GHz spectrum; it demonstrated this with an unannounced phone from Motorola Mobility that uses a Snapdragon 835. T-Mobile has also said it plans to roll out a gigabit-capable network in the U.S. later this year, and AT&T has said it expects some of its sites to reach that speed sometime this year.


Qualcomm is no longer alone in the space. The Samsung Exynos 8895, now marketed under the Exynos 9 brand, also includes its own gigabit modem, supporting Category 16 downloads with a theoretical maximum of 1Gbps using five-carrier aggregation and Category 13 uploads of up to 150Mbps using two-carrier aggregation. This chip is also manufactured on Samsung’s 10nm process and is expected to ship in the international versions of the Galaxy S8.


Intel Gigabit LTE modem


In addition, in the run-up to this year’s show, Intel announced its XMM 7560 modem, which supports Category 16 for downloads with a peak theoretical speed of 1Gbps, and Category 13 for uploads with speeds up to 225Mbps. The first modem to be built on Intel’s 14nm technology, it enables 5x20MHz CA for downloads, 3x20MHz CA for uploads, 4x4 MIMO, and 256-QAM, and it works on both licensed and unlicensed spectrum, where it coexists with Wi-Fi using a technology called License Assisted Access (LAA), which is primarily used by carriers in Europe and Japan. Intel expects samples in the first half of this year and a move into production soon afterward.


Not to be outdone, Qualcomm showed off its X20 modem, which is even faster, with a theoretical peak speed of 1.2Gbps, support for LTE Category 18, and 5x20MHz CA across licensed and unlicensed spectrum using both LAA and LTE-U. The X20 also supports the 3.5GHz Citizens Broadband Radio Service in the U.S. This modem is built on a 10nm process, and Qualcomm says it has begun sampling to customers, with the first commercial devices expected in the first half of 2018. It would seem highly likely that this modem will find its way into an applications processor around that time as well.


You may not need all this speed right now, but it seems necessary to support higher resolution video and VR applications. In the meantime, if it helps deliver content more quickly and can improve the networks, that’s a win for everyone.



Michael J. Miller is chief information officer at Ziff Brothers Investments, a private investment firm. Miller, who was editor-in-chief of PC Magazine from 1991 to 2005, authors this blog for PCMag.com to share his thoughts on PC-related products. No investment advice is offered in this blog. All duties are disclaimed. Miller works separately for a private investment firm which may at any time invest in companies whose products are discussed in this blog, and no disclosure of securities transactions will be made.

http://www.pcmag.com/article/352395/is-gigabit-lte-in-your-future


Barcelona Plans The World’s Most Diverse Supercomputer

These days, there are a number of different approaches to high-performance computing systems, usually referred to as supercomputers. Most of these systems use a massive number of Xeon processors, but some of the most interesting new machines add accelerators, such as Nvidia’s Tesla GPUs or Intel’s Xeon Phi. There’s even some talk that massive ARM-based systems could be effective in the future. But what if you could try all of these architectures in one location?


That’s the challenge and promise of the new MareNostrum 4 computer, which is being readied for installation at the Barcelona Supercomputing Center. The new design includes a main system for general-purpose use based on traditional Xeons, plus three emerging-technology clusters: one based on IBM Power processors and Nvidia GPUs, one on Intel Xeon Phi, and one on ARM processors. While I was in Barcelona for Mobile World Congress, I had a chance to talk to Sergi Girona, Operations Director for the BSC, who explained the reasoning behind the four different clusters.


Girona said the center’s main mission is to provide supercomputing services for Spanish and other European researchers, in addition to industry. As part of this mission the center wants to have at least three “emerging tech clusters,” so it can test different alternatives.


For the general computing cluster, Girona says the center chose a traditional Xeon design because it was easier to migrate applications that run on the current MareNostrum 3, which is slated to be disconnected next week. The design also had to fit the existing space, within a chapel. (I visited the center and the current supercomputer a year ago.)



The new design, to be built by Lenovo, will be based on the new Xeon v5 (Skylake), with 3,456 nodes, each with two sockets, and each chip containing 24 cores, for a total theoretical peak performance of 11.14 petaflops. Most cores will have 2GB of memory, but 6 percent will have 8GB, for a total of 331.7TB of RAM. Each node will have a 240GB SSD, though eventually some will have 3D XPoint memory, when that is available. The nodes are to be connected via Intel’s Omni-Path interconnect and 10Gb Ethernet. The system will also have six racks of storage from IBM, with 15 petabytes of capacity, including a mix of flash and hard disk drives. Overall, the design will take up 62 racks: 48 for computing, 6 for storage, 6 for networking, and 2 for management. It will fill 120 square meters (making for a very dense environment) and draw 1.3 megawatts of power, up from the 1 megawatt drawn by the previous design. Operation is expected to begin on July 1.
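The headline numbers are easy to sanity-check. Here is a minimal sketch using only the figures quoted above; the sustained clock speed is inferred for illustration, not a published BSC or Lenovo figure:

```python
# MareNostrum 4 general-purpose cluster, from the figures in this article.
nodes, sockets_per_node, cores_per_socket = 3456, 2, 24
total_cores = nodes * sockets_per_node * cores_per_socket   # 165,888 cores

peak_flops = 11.14e15                                        # 11.14 petaflops
flops_per_core = peak_flops / total_cores                    # ~67 GFLOPS per core

# A Skylake-class core with two AVX-512 FMA units can retire 32 double-precision
# FLOPs per cycle, so 67 GFLOPS per core implies a clock of roughly 2.1GHz.
implied_clock_ghz = flops_per_core / 32 / 1e9
print(total_cores, round(flops_per_core / 1e9), round(implied_clock_ghz, 1))
```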


MareNostrum 1-2-3-4


One thing I found interesting here is how clearly the move to the new generation demonstrates the progression of technology. The previous generation had a peak performance of about 1 petaflop, and this system should be more than 10 times faster, while using only 30 percent more power. For comparison, the original MareNostrum supercomputer, installed in 2004, had a peak performance of 42 teraflops and used 640 kilowatts of power. (The details of performance improvements over four generations of MareNostrum are in the chart above). Girona says this means that what would have taken a year to run on the MareNostrum 1 can be done in a single day on the new system. Pretty impressive.


For emerging technology, the site will have three new clusters. One will consist of IBM Power9 processors and Nvidia GPUs, designed to have a peak processing capability of over 1.5 petaflops. This cluster will be built by IBM and uses the same type of design being deployed in the Summit and Sierra supercomputers, which the US Department of Energy has commissioned for the Oak Ridge and Lawrence Livermore national laboratories as part of its CORAL collaboration.


The second cluster will be made up of Intel Xeon Phi processors, with Lenovo building a system that uses the forthcoming Knights Hill (KNH) version and Omni-Path, with a peak processing capability of over 0.5 petaflops. This also mirrors the American CORAL program, using the same processors that will be inside the Aurora supercomputer, commissioned by the US Department of Energy for the Argonne National Laboratory.


Finally, a third cluster will be formed of 64-bit ARMv8 processors that Fujitsu will provide in a prototype machine, designed to use the same processors that Fujitsu is developing for a new Japanese system to supplant the current K supercomputer. This too should offer more than 0.5 petaflops of peak performance. The exact timing for the beginning of operations on the emerging clusters has yet to be disclosed, Girona said.


Overall, the system will cost $34 million, under a contract won by IBM and funded by the Spanish government. One major reason for having all four types of computing on site is research, Girona said. The center, which employs 450 people in total, has 160 researchers focused on computer science, including architecture and tools. In particular, as a member of PRACE (Partnership for Advanced Computing in Europe), BSC aims to take a leading role in performance optimization and parallel computing.


Girona said that BSC wants to influence the development of new technologies, and is planning on using the new machine to analyze what will happen in the future, in particular to make sure that software is ready for whatever architecture the next machine—likely to arrive in about 3 years—will have. BSC has long worked on tools for emerging architectures, he noted.


Another topic researchers are considering is whether or not it would be worth developing a European processor for IT, likely based on the ARM architecture.


Barcelona won’t have the fastest supercomputer in the world; that record is currently held by the Chinese, with the Americans and Japanese trying to catch up. But MareNostrum 4 will be the most diverse, and potentially the most interesting.

http://www.pcmag.com/article/352274/barcelona-plans-the-worlds-most-diverse-supercomputer


The 10nm Processors of MWC 2017

One of the things that stood out at this year’s Mobile World Congress was the presence of three new mobile application processors—from MediaTek, Qualcomm, and Samsung—that all use new 10nm FinFET manufacturing processes, which promise smaller transistors, faster peak performance, and better power management than the 14 and 16nm processes used in all of the current top-end phones. During the show, we got more details on these new processors, which should start appearing in phones over the next couple of months.


Qualcomm Snapdragon 835


Qualcomm had announced the Snapdragon 835 prior to CES, but at Mobile World Congress, we were able to see the processor in a couple of phones, notably the Sony Xperia XZ Premium, due out in June, as well as an unspecified (but publicly demoed) ZTE “Gigabit phone.”


Qualcomm has said the 835 was the first 10nm product to enter production, manufactured on Samsung’s 10nm process. It is widely expected to be in the U.S. versions of the Samsung Galaxy S8, which will be unveiled on March 29.


The Snapdragon 835 uses Qualcomm’s Kryo 280 CPU cluster, with four performance cores running at up to 2.45GHz with 2 megabytes of level 2 cache, and four “efficiency” cores running at up to 1.9GHz. The company estimates that the chip will use the lower-power cores 80 percent of the time. While Qualcomm wouldn’t go into a lot of detail on the cores, the company said that rather than being completely custom designs, they are enhancements of two different ARM designs. It would make sense that the larger cores are a variation on the ARM Cortex-A73 and the smaller ones on the A53, but in meetings at MWC, Qualcomm stopped short of confirming that.


In talking about the chip, Keith Kressin, SVP of Product Management at Qualcomm Technologies, stressed that power management was a major focus, as it allows for sustained performance. But he also stressed the other features of the chip, which uses Adreno 540 graphics based on the same basic architecture as the Adreno 530 in the 820/821 but delivering a 30 percent improvement in performance. It also includes a Hexagon DSP, with support for TensorFlow for machine learning, as well as an improved image signal processor.


New in the processor is the company’s Haven security module, which handles such things as multifactor authentication and biometrics. Kressin stressed that what’s important is how all of this works together, and noted that wherever possible the chip will use the DSP, then the graphics, then the CPU. The CPU is actually “the core we least want to use,” he said.


One of the most notable features is the integrated “X16” modem, capable of gigabit download speeds (by using carrier aggregation on three 20 MHz channels) and upload speeds of 150 megabits per second. Again, this should be the first modem to ship that’s capable of such speeds, albeit only in markets where the wireless providers have the right spectrum. It also supports Bluetooth 5 and improved Wi-Fi.


Kressin said the processor will enable 25 percent better battery life than the previous 820/821 chips (manufactured on Samsung’s 14nm process) and will include Quick Charge 4.0 for faster charging.


At the show, the company announced a VR development kit and gave more details about how the chip will better handle VR and augmented reality applications, with an emphasis on improved function in stand-alone VR systems.


It is likely that the Snapdragon 835 will appear in a lot of phones over the course of the year. Kressin said the company was “hitting target yields today” and that it will ramp throughout the year.


Samsung Exynos 8895


Samsung LSI hasn’t been as public about its chip, but used MWC as a way of putting a more public face on its product lines, which will now be branded Exynos 9 for processors aimed at the premium market, and Exynos 7, 5, and 3 for high-end, mid-tier, and low-end phones. The company also makes ISOCELL image sensors and a variety of other products.


Samsung LSI just announced the first Exynos 9 processor, technically the 8895, which will also be its first processor produced on the company’s 10nm FinFET process; Samsung says the process provides 27 percent higher performance while using 40 percent less power than its 14nm node. The 8895 is widely expected to be in international versions of the Galaxy S8, though we’re unlikely to see it in the U.S., as its internal modem doesn’t support the older CDMA networks used by Verizon and Sprint.


Like the Qualcomm chip, Samsung’s has eight cores in two groups. The four high-end cores use the company’s second-generation custom design; Samsung said these are ARMv8-compatible but have an “optimized microarchitecture for higher frequency and power efficiency,” though it wouldn’t discuss the differences in any more detail. For graphics, it uses an ARM Mali-G71 MP20, meaning it has 20 graphics clusters, up from 12 in the 14nm 8890 used in some of the international Galaxy S7 models. This should allow for faster graphics, including 4K VR at up to a 75Hz refresh rate, as well as support for recording and playback of 4K video at 120fps.


The two CPU clusters and the GPU are connected using what the firm calls the Samsung Coherent Interconnect (SCI), which enables heterogeneous computing. And it also includes a separate vision processing unit, designed for face and scene detection, video tracking, and things like panoramic pictures.


The product also includes its own gigabit modem, with a theoretical maximum of 1Gbps downlink (Category 16, using five-carrier aggregation) and 150Mbps uplink (Category 13, using two-carrier aggregation). It will support 28-megapixel cameras or a dual-camera setup with 28 and 16 megapixels.


Exynos 9 VR Demo


The firm said this allows for better stand-alone VR headsets, and it demonstrated a stand-alone headset with a 700-pixels-per-inch display. I thought the display was notably sharper than on the commercial VR headsets I’ve seen to date, though I still noticed a bit of the screen-door effect; what stood out was how fast the response time seemed given the higher resolution.


MediaTek Helio X30


MediaTek Helio X30 Overview


MediaTek had announced its 10-core Helio X30 last fall, but at the show the company said its 10nm chip had entered mass production and should be in commercial phones in the second quarter of this year.


We got a lot more technical detail at last month’s International Solid-State Circuits Conference (ISSCC), but the highlights remain interesting, as this seems likely to be the first chip out using TSMC’s 10nm process.


MediaTek Helio X30 CPU Architecture


The key difference with this processor is its “tri-cluster” deca-core CPU architecture, featuring two 2.5 GHz ARM Cortex-A73 cores for high performance, four 2.2 GHz A53 cores for less demanding tasks, and four 1.9 GHz A35 cores that run when the phone is only doing light duty. These are connected by the firm’s own coherent system interconnect, called MCSI. A scheduler, known as Core Pilot 4.0, manages the interactions among these cores, turning them on and off and working to manage thermals and user experience items such as frames-per-second in order to deliver consistent performance.
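To make the idea concrete, here is a purely hypothetical sketch of how cluster selection can work; Core Pilot itself is proprietary, so the thresholds and policy below are illustrative assumptions, not MediaTek’s:

```python
# Tri-cluster placement, simplified: send work to the least powerful cluster that
# can keep up, so the bigger cores (and their power draw) stay off most of the time.
CLUSTERS = [
    ("A35 @ 1.9GHz", 0.25),   # ultra-low-power: idle and background work
    ("A53 @ 2.2GHz", 0.70),   # low-power: most everyday tasks
    ("A73 @ 2.5GHz", 1.00),   # high-performance: short bursts of heavy work
]

def pick_cluster(load):
    """Return the first (least powerful) cluster whose relative capacity covers the load."""
    for name, capacity in CLUSTERS:
        if load <= capacity:
            return name
    return CLUSTERS[-1][0]

print(pick_cluster(0.10))  # A35 cluster
print(pick_cluster(0.50))  # A53 cluster
print(pick_cluster(0.90))  # A73 cluster
```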


As a result, the company says the X30 delivers a 35 percent improvement in multi-threaded performance and a 50 percent improvement in power efficiency compared with last year’s 16nm Helio X20. That’s notably better than what the company claimed at the chip’s introduction. In addition, graphics have been improved: the chip now uses a variation of the Imagination PowerVR Series7XT running at 800MHz, which MediaTek says works at the same level as the current iPhone’s graphics, delivering 2.4 times the processing power while using 60 percent less power.


The chip has a Category 10 LTE modem, which supports LTE-Advanced, 3-carrier aggregation downloads (for a maximum theoretical download speed of 450Mbps), and 2-carrier aggregation uploads (for a maximum of 150 Mbps).


While MediaTek said its modems are certified in the U.S., you’re unlikely to see this chip in many phones in this market. That’s because it’s aimed at “sub-flagship” models and Chinese OEMs.


I asked Finbarr Moynihan, MediaTek’s General Manager of Corporate Sales, where application processors go from here, and he said he expects more focus on user experience: smooth performance, fast charging, the camera, and video features.


ARM Looks Forward


At the show, ARM announced that it had acquired two companies, Mistbase and NextG-Com, for software and hardware intellectual property that implements the NB-IoT standard, which was part of 3GPP Release 13. The company said it would include these technologies in its Cordio family of radio solutions for the Internet of Things. What ARM hasn’t done is announce anything beyond the A73, A53, and A35, which, according to John Ronco, VP of Product Marketing for ARM’s CPU group, it sees as sustaining technologies going forward. But the A73 was designed for 10nm to 28nm processes, and he suggested that a future core could target 16nm processes. While he admitted that each generation of process technology has its challenges, he pointed to work at GlobalFoundries, Samsung, and TSMC as proof that we haven’t reached the end of Moore’s Law. Looking forward, he echoed much of what the processor makers themselves say, noting that “efficiency is the key” to delivering the kind of user interface customers want.




http://www.pcmag.com/article/352240/the-10nm-processors-of-mwc-2017


MWC 2017: The Sensors and Components That Could Make Your Next Smart Phone Even Smarter

When I attend trade shows such as Mobile World Congress, I like to spend at least a little time checking out what the makers of the various components are up to. After all, in many cases it’s the various sensors and similar components that will tell us the features that may be in the next generation of smartphones. At this year’s show, I noticed components to enable features from improved positioning, to measuring your blood pressure, to checking air quality. Some of the sensors are simply improvements on already commonplace features, while others are new and may never become mainstream, but it’s interesting to see what could happen.


Looking first at features that are now more common, I talked with a number of companies that make accelerometers, gyroscopes, and similar sensors that enable our phones to tell us how many steps we’ve taken, and so on. In general, these products are MEMS, or micro-electro-mechanical systems: tiny mechanical devices, as opposed to the standard transistors that make up most of the electronics in a device. Such technology is not only used in phones but also often shows up in other devices, like wearables and drones.


InvenSense talked about how it makes 3-axis accelerometers, 6-axis sensors (which add a gyroscope), 9-axis sensors (which add a compass), and other similar parts. Some add a barometer to detect pressure, which is used for things like determining when you are climbing stairs. Its competitors include two much larger companies, ST and Bosch.


One interesting idea I heard from InvenSense involves using a 9-axis sensor to augment GPS, either to improve positioning inside buildings or to save battery by switching the GPS radio off. In addition, the firm showed an impressive electronic image stabilization solution for smoother video capture.


Bosch sensors


Bosch was also showing 9-axis “absolute orientation” sensors, barometric pressure sensors, and integrated chips that measure pressure, humidity, and temperature. ST had many of the same concepts, but I was also intrigued by the idea of a MEMS-based speaker for earphones, instead of the conventional solutions.


Hamamatsu


Both ST and Hamamatsu had sensors for measuring distance, with the latter showing photonics-based devices for things such as time-of-flight image sensors and measuring blood flow.


Vkansee Fingerprint sensor


Another sensor we’ve grown accustomed to in current phones is the fingerprint scanner. Vkansee showed a fingerprint sensor working under glass that can be embedded in a screen. The company demonstrated this inside a Lenovo laptop, though technically one could imagine it in a phone or tablet as well.


I saw a similar concept from Synaptics at CES earlier this year.


But if these sensors were typical, some of the others were much more unusual.


Consumer Physics Changhong H2


I was intrigued a couple of years ago when I first saw Consumer Physics, which had developed SCiO, a molecular sensor that can analyze things such as food or drinks and measure a variety of characteristics like calories, fats, sugars, and proteins. Now working with Analog Devices, the company has developed a platform that can be integrated into a phone. At the show, it demonstrated the Changhong H2, which is due to be available shortly from the Chinese smartphone maker, and said that a “Tier 1” smartphone maker would be including its sensor in a phone in early 2018.


LMD blood pressure


Leman Micro Devices showed a sensor that could be incorporated into a phone to measure your blood pressure, heart rate, blood oxygen and more. In a demo, you squeezed the knuckle of your finger against a sensor on the edge of the phone. The company said it had a major phone provider as an investor, and suggested the sensor could be in a phone next year.


CrucialTec gas analyzer


Another interesting idea came from CrucialTec, which makes a wide variety of sensors, including fingerprint sensors. They showed some of those plus sensors for heart rate monitoring, non-contact thermometers, and a gas analyzer that can measure contaminants in the air. I could see where these could be quite useful, whether in a phone or a stand-alone device.


Again, we may or may not see these sensors in mainstream phones anytime soon, but they give us an idea of what kind of functions could be added to the phones of the future.




http://www.pcmag.com/article/352156/mwc-2017-the-sensors-and-components-that-could-make-your-n


MWC 2017: What We Learned About Smartphones

Over the past few years, it’s become harder and harder for the makers of Android phones to differentiate themselves. Every new Android phone features a pretty fast processor, a relatively current version of the operating system, and a design that is dominated by a large rectangular screen. Some vendors have tried putting heavy user interface overlays on top of Android, but mostly the market reaction has been negative. So I’m always interested in what the different makers come up with each year to make their offerings really stand out. The phones I saw at Mobile World Congress this year mostly stood out because of innovations in two areas: the screen and the camera.


The tension between looking for innovation and being constrained by the basic assumptions of Android phones was notable in many of the introductions. Juno Cho, CEO of LG’s mobile business, pretty much admitted that the company’s emphasis on a modular design with last year’s G5 was a failure, though he said LG was “still proud to have made that effort.” But he said the competition in phones now is all about “usability.”


In announcing a March 29 launch for the next Galaxy smartphone, David Lowes, CMO of Samsung Electronics Europe, said the company still had a willingness to “push boundaries” and introduce “innovation of a kind this industry hasn’t seen in a long time.” But he had to start his presentation by recapping what the company was doing to address the battery overheating issues that the Note 7 experienced.


LG G6


LG had what I thought was the most surprising innovation this year, introducing the LG G6 with a 2:1 ratio display, what the company calls an 18:9 ratio or “FullVision,” instead of the 16:9 ratio that is now standard (on phones and most laptops and TVs). Effectively, this means a phone that is only as wide as one with a 5.2-inch 16:9 display can still carry a 5.7-inch screen; the extra area goes to height when you hold it vertically. I was only able to take a quick look at this during the introduction, but it could really be a better format for web browsing, email, and Facebook, things I spend a lot of time doing on my phone. The company also has software designed to make video look right at that dimension, but I’d really like to see how well that works with typical content. It’s interesting that LG is first with this format; it looks like the electronics company leveraged its connection with sibling LG Display. Here’s PC Mag’s preview.
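The geometry checks out. Here is a quick calculation using only the diagonal sizes and aspect ratios above; the math is just the Pythagorean theorem:

```python
# In portrait, a phone's width is the short side of the panel. Compare a 16:9
# 5.2-inch panel with an 18:9 ("FullVision") 5.7-inch panel.
from math import hypot

def panel_width_inches(diagonal, long_ratio, short_ratio):
    return diagonal * short_ratio / hypot(long_ratio, short_ratio)

print(round(panel_width_inches(5.2, 16, 9), 2))  # ~2.55 in for the 16:9 5.2-inch screen
print(round(panel_width_inches(5.7, 18, 9), 2))  # ~2.55 in for the 18:9 5.7-inch screen
# Same width in the hand; the extra diagonal all goes into height.
```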


Sony Xperia XZ Premium 4K HDR


Another change relating to the screens: displays are simply getting better, in particular with HDR support coming to many phones, following the trend we’ve seen in the TV market. Sony showed off a 4K HDR panel (2,160 by 3,840 pixels in a 5.5-inch display) in its Xperia XZ Premium, due in June. The company announced a partnership with Amazon Prime for streaming 4K HDR video.


The LG G6 also supports HDR, in both the HDR10 and Dolby Vision formats, with the firm announcing Dolby Vision content from both Amazon Prime and Netflix. The other big differentiating feature is photography, continuing a trend of the past couple of years.


I was particularly impressed with some of the new photo features in the Sony Xperia XZ Premium. It comes with a 19-megapixel memory-stacked Exmor RS sensor similar to those used in premium compact cameras. This camera system, which Sony calls “Motion Eye,” has some interesting features, including a “super slow motion” mode that looked great and the ability to pre-fetch three photos before you push the capture button, using the memory on the sensor. If you’ve ever just missed the shot, which we all have, I can imagine that being quite useful. Here’s PCMag’s Hands-On.


Huawei P10


Huawei introduced its flagship P10 and P10 Plus models, and the whole focus was on “Leica-style portraiture,” with the smartphone maker talking up its partnership with the camera vendor. The P10 features the dual rear-camera setup that is becoming increasingly common, in this case a Summilux-H f/1.8 lens in front of a 20-megapixel monochrome sensor and a 12-megapixel RGB sensor, with OIS (optical image stabilization) and 4K video recording. The company says this will enable better low-light performance and improved bokeh (background blur, as with typical SLR photography). It also has a 3D-sensing feature that can determine the location of a specific object.


That sounds good, but what’s more unusual is Leica’s involvement in the 8-megapixel front-facing camera as well, with software that detects whether it’s a single-person selfie or a group shot and adjusts the photo accordingly. I’m not much of a selfie taker, but I can see where people will like this. Here’s PCMag’s Hands-On.


It’s a shame that we don’t see much of Sony or Huawei in the U.S. market, as these devices show some real improvements, particularly in photography. The LG G6 also has some interesting photo features, including a number of options for square photos: a guided shot for duplicating the composition of another photo, a match shot for two side-by-side photos, and a grid shot that combines four photos automatically. I can see a lot of Instagram users liking this.


Moto G5 Plus


Of course, there were also a lot of mid-range phones that aren’t really breaking new ground. Lenovo’s Motorola group launched the new Moto G5 and G5 Plus (above), the latter of which looks like a good new entrant in the U.S. market. (Here’s PCMag’s Hands-On.) HMD Global, which now controls the Nokia name, announced a slew of new Nokia branded phones. (Here’s PCMag’s Hands-On.)


ZTE Gigabit phone


In the next few months we’ll see a new generation of application processors built on new 10nm processes, but the phones based on these aren’t quite ready yet. Qualcomm pushed its Snapdragon 835, saying it expects many phones to adopt it later in the year. Sony announced the 835 for its Xperia XZ Premium, though that phone isn’t expected until June. ZTE pre-announced plans for a “gigabit phone,” and strongly hinted that its next Axon phone will use the 835. It still seems likely that the first major phone to actually ship with this processor will be the Samsung Galaxy S8. Recently, Samsung announced its own 8895 processor, similarly built on its 10nm process, while MediaTek announced its 10-core Helio X30, planned for TSMC’s 10nm process and expected to follow shortly. Again, we didn’t see phones based on these yet, but I’m expecting them later in the year, and I’ll have more thoughts on the processors later. (Meanwhile, the LG G6 is based on the current but still strong Qualcomm Snapdragon 821, while the Huawei P10 is based on the firm’s own Kirin 960, a 16nm design that debuted a few months ago in the Mate 9.)




Of course, there are always people who are happier with the past, and to that end, HMD Global re-introduced the classic Nokia 3310. It’s not what I want for my phone these days, but it does bring back memories.


I’d rather look forward than back, and it’s great to see a variety of companies taking another look at what it takes to really change the smartphone.

http://www.pcmag.com/article/352047/mwc-2017-what-we-learned-about-smartphones


Living With a Huawei Honor 6X

With a list price of $250, I wasn’t expecting my experience with the Huawei Honor 6X smartphone to be as good as the experience one would have with one of the flagship Android phones like the Google Pixel XL, the Samsung Galaxy S7, or Huawei’s own Mate 9. But the 6X turned out to be a pleasant surprise, and its camera is the standout. Like the pricier Honor 8, it has a dual rear camera, which it uses to create bokeh (background blur) effects. Huawei stresses that the phone uses a Sony IMX386 Exmor RS sensor, which gives it faster focusing and larger pixels (1.25 microns) than the typical smartphone. In addition, the Honor 6X includes an 8MP front-facing camera with a wide-angle lens, designed for taking selfies.


In general use, I found the camera to be pretty fast and pretty good in most situations, if not quite up to the best of the higher-end phones. See, for example, the photo of Grand Central Terminal above—it’s nice, but not quite as sharp as what I was able to get with some other cameras. Still, for a typical landscape, portrait, or selfie, the Honor 6X takes pictures you’d be quite happy to view, print, or post on social media.


Honor 6X


The dual camera really comes into play when taking portraits. You get to this feature by choosing the wide-aperture selection from the photo menu; after the photo is taken, you can then adjust the focus point. The feature isn’t perfect—I’ve yet to see a smartphone that can really do this as well as a professional DSLR with a great lens—but it’s far better than I’ve noticed with other cameras in this price range. I was pretty impressed.


I found low-light photography to be okay, if a bit noisy. The “night shot” mode can improve on this significantly, but I found that it really only works if you’re using a tripod or stand, since the mode requires the phone be held steady for 20 seconds or so. The camera also has an interesting time-lapse option and a “light-painting” mode for things like capturing the trails of light from moving cars. There are a variety of filters, a popular feature I personally rarely use.


Honor 6X camera


Like the Honor 8, the 6X includes Huawei’s EMUI 4.1 user interface, a relatively heavy overlay on top of Android 6.0 Marshmallow. I found it pretty usable, though I can’t say it added much to the basic Android experience. As with many Android phones, my experience with the built-in email and calendar applications has been less than ideal. (Of course, you can download others.) By default, the 6X does not include the Google Assistant, though it does have the voice-activated Google Now interface.


Overall, what really impressed me was that this is a phone that costs less than half as much as the top-end phones and yet looks and behaves similarly. It doesn’t have the cool look of an iPhone or the fancy colored back of the Honor 8 or the Galaxy S7, but the 6X does the job with style, and with bonus features (notably the dual camera) to spare.


Here’s PCMag’s review.




http://www.pcmag.com/commentary/351924/living-with-a-huawei-honor-6x


Explore the Highlights of the Solid-State Circuits Conference (ISSCC)

We’ve heard a lot about Moore’s Law slowing lately, and while that does seem to be true in some cases, in other parts of the semiconductor business, there is ongoing progress. At last week’s International Solid-State Circuits Conference (ISSCC), the big chip trends seemed to be around deploying new materials, new techniques, and new ideas to keep pushing transistor density higher and improving on power efficiency. Of course, that isn’t really news. We saw this reflected in talks about producing logic chips on new 7nm processes, on creating 512Gb 3D NAND chips, and on a variety of new processors.


Chip designers are considering new structures and materials for transistors, as shown in the slide above from TSMC. There were also plenty of discussions of new tools for making the transistors, including lithography advances such as EUV and directed self-assembly, and new ways of packaging multiple die together.


Before digging into the details, it remains pretty amazing to me just how far the chip industry has come and just how pervasive chips have become in our daily lives. Texas Instruments CTO Ahmad Bahai noted in his presentation that in 2015, the industry sold an average of 109 chips for every person on the planet. His talk focused on how, instead of markets dominated by a single application (first PCs, then cell phones), the industry now needs to focus on “making everything smarter,” as different kinds of chips find their way into a huge number of applications.


The industry faces big challenges, though. The number of companies that can afford to build leading-edge logic fabrication plants has shrunk from twenty-two at the 130nm node to just four today at the 16/14nm node (Intel, Samsung, TSMC, and GlobalFoundries), with new process technology costing billions of dollars to develop and new plants costing even more. Indeed, last week Intel said it would spend $7 billion to prepare for 7nm production at a fab shell it built a few years ago in Arizona.


Still, there were a number of presentations on various companies’ plans to move to 10nm and 7nm processes.



TSMC has rolled out its 10nm process, and MediaTek’s Helio X30 (discussed below) is likely to be the first chip built on it. TSMC may be the farthest along at actually commercializing what it calls a 7nm process, and at ISSCC, it described a functional 7nm SRAM test chip. This will use the now-standard FinFET transistor concept, but with some circuit techniques to make it work reliably and efficiently at the smaller size. Notably, TSMC says it will produce the first version of its 7nm chips using immersion lithography, rather than waiting for EUV like most of its competitors.


Recall that what each of the major manufacturers calls 7nm varies tremendously, so in terms of density, it’s possible that the TSMC 7nm process will be similar to Intel’s forthcoming 10nm process.


Samsung 7nm EUV


Samsung is also working on 7nm, and the company has made it clear that it plans to wait for EUV. At the show, Samsung talked about the advantages of EUV lithography as well as the progress it has made in using the technology.


3D NAND


Some of the more interesting announcements covered 512Gb 3D NAND flash, and showed just how quickly NAND flash density is growing.


WD 3D NAND Bit Density


Western Digital (which has acquired SanDisk) talked about a 512Gb 3D NAND flash device that it announced prior to the show, and explained how this device continues to increase the density of such chips.


WD 3D NAND Die Micrograph


This particular chip uses 64 layers of memory cells and three bits per cell to reach 512Gb on a die that measures 132 square millimeters. It’s not quite as dense as the Micron/Intel 3D NAND design, which uses a different architecture with the peripheral circuitry under the array (CuA) to reach 768Gb on a 179-square-millimeter die, but it’s a nice step forward. WD and Toshiba said they were able to improve reliability, speed up read times by 20 percent, and reach write throughput of 55 megabytes per second (MBps). The chip is in pilot production and due to be in volume production in the second half of 2017.


Samsung 3D NAND Bit Scaling


Not to be outdone, Samsung showed off its new 64-layer 512Gb 3D NAND chip, one year after it showed a 48-layer 256Gb device. The company made a big point of demonstrating that while the areal density of 2D NAND flash grew 26 percent per year from 2011 to 2016, it has been able to increase the areal density of 3D NAND flash by 50 percent per year since introducing it three years ago.


Samsung 512 GB 3D NAND Architecture


Samsung’s 512Gb chip, which also uses three-bits-per-cell technology, has a die size of 128.5 square millimeters, making it slightly denser than the WD/Toshiba design, though not quite as good as the Micron/Intel design. Samsung spent much of its talk describing how using thinner layers has presented challenges and how it has created new techniques to address reliability and power challenges created by using these thinner layers. It said read time is 60 microseconds (149MBps sequential reads) and write throughput is 51MBps.
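Dividing capacity by die size makes the ranking explicit; a quick check using only the figures quoted above:

```python
# Areal density implied by the quoted die sizes, in gigabits per square millimeter.
chips = {
    "WD/Toshiba 64-layer TLC": (512, 132.0),
    "Micron/Intel CuA":        (768, 179.0),
    "Samsung 64-layer TLC":    (512, 128.5),
}
for name, (gigabits, area_mm2) in chips.items():
    print(f"{name}: {gigabits / area_mm2:.2f} Gb/mm^2")
# WD/Toshiba ~3.9, Samsung ~4.0, Micron/Intel ~4.3 -- matching the article's ranking.
```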


It’s clear all three of the big NAND flash camps are making good progress, and the result should be denser and eventually less expensive memory from all of them.


New Connections


Intel EMIB


One of the topics I have found most interesting lately is the embedded multi-die interconnect bridge (EMIB), an alternative to other so-called 2.5D technologies for combining multiple die in a single chip package; it is less expensive because it doesn’t require a silicon interposer or through-silicon vias. At the show, Intel talked about this when describing a 14nm 1GHz FPGA with a die size of 560mm2, surrounded by six 20nm transceiver die that are manufactured separately, possibly even on other process technologies. (This is presumably the Stratix 10 SoC.) But it became more interesting later in the week, as Intel described how it would use this technique to create Xeon server chips at 7nm and in the third generation of 10nm.


Processors at ISSCC


ISSCC featured a number of talks about new processors, but rather than product announcements, the focus was on the technology that goes into making the chips work as well as possible. I was interested to see new details on a number of highly anticipated chips.


AMD Comparison


I’m expecting the new Ryzen chips using AMD’s Zen architecture to ship shortly, and AMD gave a lot more technical detail about the design of the Zen core and its various caches.


This is a 14nm FinFET chip built around a core complex of four cores, 2MB of level 2 cache, and 8MB of 16-way associative level 3 cache. The company says the base frequency for an 8-core, 16-thread version will be 3.4GHz or higher, and that the chip offers a greater than 40 percent improvement in instructions per cycle (IPC) over the previous AMD design.


The result is a new core that AMD claims is more efficient than Intel’s current 14nm design, though, of course, we’ll have to wait for final chips to see the real performance.


As described before, this will be available initially in desktop chips known as Summit Ridge and is slated to be out within weeks. A server version known as Naples is due out in the second quarter and an APU with integrated graphics primarily for laptops is due to appear later this year.


IBM Power9


IBM gave more detail on the Power9 chips it debuted at Hot Chips, designed for high-end servers and now described as being “optimized for cognitive computing.” These are 14nm chips that will be available in versions for both scale-out (with 24 cores that can each handle 4 simultaneous threads) and scale-up (with 12 cores that can each handle 8 simultaneous threads). The chips will support CAPI (Coherent Accelerator Processor Interface), including CAPI 2.0 over PCIe Gen 4 links at 16 gigabits per second (Gbps), and OpenCAPI 3.0, designed to work at up to 25Gbps. In addition, they will work with NVLink 2.0 for connections to Nvidia’s GPU accelerators.


Mediatek SoC


MediaTek gave an overview of its forthcoming Helio X30, a 2.8GHz 10-core mobile processor, notable for being the company’s first to be produced on a 10nm process (presumably at TSMC).


This is interesting because it has three different core complexes: the first has two ARM Cortex-A73 cores running at 2.8GHz, designed to handle heavy-duty tasks quickly; the second has four 2.5GHz A53 cores, designed for most typical tasks; and the third has four 2.0GHz A35 cores, which are used when the phone is idle or for very light tasks. MediaTek says the low-power A53 cluster is 40 percent more power efficient than the high-power A73 cluster, and that the ultra-low-power A35 cluster is 44 percent more power efficient than the low-power cluster.


At the show, there were also a lot of academic papers on topics like chips specially designed for machine learning. I’m sure we’ll see much more emphasis on this going forward, from GPUs to massively parallel processors designed to handle 8-bit computing, to neuromorphic chips and custom ASICs. It’s a nascent field, but one that is getting an amazing amount of attention right now.


Even further out, the biggest challenge may be moving to quantum computing, which is a whole different way of doing computing. While we are seeing more investments, it still seems a long way from becoming a mainstream technology.


In the meantime, though, we can look forward to a lot of cool new chips.




http://www.pcmag.com/article/351802/explore-the-highlights-of-the-solid-state-circuits-conferenc


Data Center, New Initiatives Top Agenda at Intel’s Investor Day

Attending Intel’s Investor Day, what struck me most was how Intel is changing from a company led by its PC client business into one that is much more diversified, and one increasingly led by its data center business. This was best exemplified by the news that, in a few years, when the company is finally ready with its 7nm process, the first chips created on that process will be Xeon processors aimed at the data center. That’s a big break with tradition: for decades, Intel has brought its newest technology first to processors for clients (once desktops, now notebooks), with server products tending to follow a year or more later.


This is a big part of CEO Brian Krzanich’s plan to position Intel to address a much larger market than the traditional PC and server businesses, which together have a total addressable market of about $45 billion a year. Instead, he said, Intel is going after a much larger opportunity, including the broader data center (covering networking and interconnects), non-volatile memories, mobile (through premium modems), and the Internet of Things, items that together represent a $220 billion total addressable market for silicon by 2021.


All of these markets, he said, build on Intel’s traditional strengths in silicon and process technology. And they are all linked by a need for computing on larger amounts of data in the future, in a vision that sees data collected, moved to the cloud, used for large-scale data analytics, and then pushed back out; but with more computing needed on devices at the edge for real-time decisions as well.



As he has in a number of recent presentations, Krzanich explained that he sees the amount of data growing tremendously, noting that today the average person generates about 600MB of data each day, a figure he forecasts will grow to 1.5GB by 2020. While today’s cloud is built mostly on data from people, he said, the cloud of tomorrow will be built mostly on machine data. The average autonomous vehicle produces 4TB of data a day, a plane 5TB, and a smart factory a petabyte, while cloud video providers can push out as much as 750PB of video daily. Individual applications could produce even more, he said, noting that the company’s “360 Replay” technology, used during the Super Bowl and other sports events, consumes 2TB of data per minute. At Intel, “we are a Data Company,” Krzanich said.


I found it interesting that Krzanich said Intel’s top priority for the year is continued growth in the data center and adjacent technologies. This was followed by continuing to have a strong and healthy client business, growth in the Internet of Things business, and “flawless execution” in its memory and FPGA businesses.


Other speakers gave details about each of these markets, including some interesting technology and market trends, as well as financial projections.


10nm Technology and the PC Business


Murthy Renduchintala, who runs the company’s Client and Internet of Things Businesses and its Systems Architecture Group, began by talking about “trying to align process roadmaps with our product roadmaps,” and explained that as an integrated device manufacturer (IDM)—in other words, a company that not only designs semiconductor products but also manufactures them—Intel has several advantages.


Renduchintala compared Intel to an “artisan baker” who not only makes the bread but also works with farmers to decide which wheat to plant and where to plant it. This way, the product designers can look at transistor physics three years before a product is manufactured. For instance, he said, Intel has used different flavors of transistors for the CPU and GPU even within the same chip, a level of granularity that Renduchintala said fabless semiconductor companies would find difficult to achieve. (He joined Intel about a year ago from Qualcomm, which, like most other vendors in the industry, uses foundries to do the actual manufacturing of its products.)


Renduchintala and Chip Density


Even though other companies are talking about producing chips on 10nm and even 7nm, Renduchintala said that Intel has a three-year lead over the others. He noted that rather than focusing only on gate pitch, Intel focuses on the effective logic cell area, defined as cell width times cell height. He said Intel will maintain this lead even after competitors deliver 10nm later this year. Intel plans to release its first 10nm chips later this year as well (Krzanich showed a 2-in-1 laptop powered by a 10nm Cannon Lake processor at CES in January), and this will account for significant volume in 2018, he said.


The economic side of Moore’s Law is alive and well despite rising wafer costs, Renduchintala said, noting that the company believes this will be true of the 7nm node as well. But he placed new emphasis on improvements within a process node, saying each of the three generations of 14nm technology thus far has produced 15 percent better performance on the SYSmark benchmark. He believes Intel can continue to deliver gains on an annual cadence, with continued process improvements as well as design and implementation changes.


On the PC business, he noted that even though PC units have been falling, Intel’s profits in the segment grew significantly last year, mostly because of a focus on particular segments, such as PC gaming, where the company introduced a 10-core Broadwell-E platform with an average selling price of over $1,000; and by pushing platform technologies, such as LTE modems, Wi-Fi, WiGig, and Thunderbolt. He noted that the company has grown its mix of higher-end processors and hopes to continue that trend in 2017.


Looking forward, Renduchintala said the client group has made strategic bets on VR and on 5G modems. He noted Intel’s approach to 5G is very different from its approach to 4G, where it initially pushed WiMax, while the rest of the industry settled on LTE. He said Intel now knows it needs industry-wide standards and partners and cited a variety of companies Intel is working with on core networking, access network standards, and wireless radio standards. He said Intel is the only company that can provide 5G “end-to-end” solutions from the “cloudification of the RAN” (the radio access network) to the data center, and said it plans to be shipping samples of its first 5G global modem by the end of the year—using Intel’s 14nm technology—and plans to ship these in the millions in 2018.


Data Center Grows Beyond Traditional Server


Diane Bryant, who runs the company’s Data Center Group, focused on how enterprises are going through a period of transition, driven by the move to cloud computing, network transformation, and the growth of data analytics.


One big change for her group going forward is that it will be the first to launch on the next generation process node, meaning that Xeon products will be Intel’s first 7nm processors. In addition, she said, the data center products would also be the first on the “third wave” of 10nm products. (The first wave of 10nm, for mobile products, is due out at the end of this year, so the first 10nm servers won’t be out until next year at the earliest. Intel hasn’t yet confirmed an exact date for its 7nm process, but it seems likely that it would be in 2020 or 2021.)


A few different factors will make this change possible, Bryant said. First, the data center business now has enough volume, as it takes a significant number of wafers to bring up a new process. But just as important is Intel’s new use of a packaging technology called EMIB (Embedded Multi-die Interconnect Bridge), which lets the company cut a Xeon die into four pieces, each of which can be debugged independently, and then connect them via this 2.5D package so they function as a single chip. (The package was actually first announced in 2014, but the company gave more details at this week’s ISSCC conference, and this looks like its first major use.) Until now, a server die was just too big to be used for first production on a new process, but by cutting it into pieces, you get a number of smaller die that are usable.
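A toy yield model shows why the split matters. This is only a sketch: the defect density and die area below are illustrative assumptions, not Intel figures.

```python
# Classic Poisson yield model: the chance a die has zero defects falls off
# exponentially with its area, so four small die beat one big one.
from math import exp

defects_per_cm2 = 0.3          # assumed defect density for a young process
big_die_cm2 = 6.0              # assumed area of a large monolithic server die
quarter_die_cm2 = big_die_cm2 / 4

yield_big = exp(-defects_per_cm2 * big_die_cm2)           # ~17% of big die are good
yield_quarter = exp(-defects_per_cm2 * quarter_die_cm2)   # ~64% of quarter die are good

print(f"monolithic die yield:   {yield_big:.0%}")
print(f"quarter-size die yield: {yield_quarter:.0%}")
# Because each quarter die can be tested before packaging, EMIB lets known-good
# pieces be assembled into one logical chip instead of scrapping a whole large die.
```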


Bryant and Enterprise Transformation


Bryant noted that Intel's overall data center business grew 8 percent last year, but enterprise and government sales were actually down 3 percent, while cloud service provider sales were up 24 percent and communications service provider sales were up 19 percent. Enterprise sales accounted for 49 percent of the business last year, the first time that segment has been less than half of the group's sales.


Bryant said that enterprises continue to need more compute—growing at 50 percent per year—but that some workloads are quickly moving to the cloud, while others are mostly staying on premises. For instance, she said, collaboration workloads grew 15 percent in the cloud last year but actually shrank 21 percent on-premises. On the other hand, she said, high-performance simulation and modeling require extremely low latency, so they are almost entirely run on-premises. Overall, 65 percent of workloads are now run on-premises, a figure she expects to level out at about 50 percent by 2021.


Bryant and AI Workloads


Broadly defined, artificial intelligence applications account for about 7 percent of today's servers, Bryant said, with the majority running classical machine learning algorithms in applications such as recommendation engines, stock trading, and credit card fraud detection. Deep learning—the neural-network approach behind the prominent image recognition and voice processing applications—accounts for about 40 percent of that AI work, she said. In this area, Bryant noted that GPGPU instances have gotten a lot of attention, but that they still represent only a small percentage of the overall server market: 20,000 to 30,000 servers out of 9.5 million.
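For a sense of scale, here is the quick arithmetic implied by that last point, using only the figures Bryant cited:

```python
# Share of the installed server base running GPGPU instances, using the
# figures cited above (20,000-30,000 servers out of roughly 9.5 million).
total_servers = 9_500_000
gpgpu_low, gpgpu_high = 20_000, 30_000

low_pct = gpgpu_low / total_servers * 100
high_pct = gpgpu_high / total_servers * 100
print(f"GPGPU servers: {low_pct:.2f}% to {high_pct:.2f}% of the installed base")
# -> roughly 0.2 to 0.3 percent of all servers
```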


Bryant noted Intel’s intention to serve all parts of the AI market with a series of processors, including the next-generation traditional Xeon servers; packages that combine Xeon with the firm’s FPGAs (through its Altera acquisition); Xeon Phi (with many smaller cores in a new version called Knights Mill that allows lower-precision calculations); and Lake Crest, which includes a chip specifically designed for neural networks, a result of the acquisition of Nervana. The Nervana name is being used to describe the whole line.


Another change is Intel's increased focus on what it calls "adjacencies"—products that surround the server, including its Omni-Path interconnect used in the high-performance computing market; silicon photonics, including an on-chip laser providing 100Gbps now, with 400Gbps on the roadmap; 3D XPoint memory DIMMs; and its Rack Scale Design proposal for denser, more energy-efficient server racks. Bryant talked about the increasing importance of the networking market, where Intel is working to convert communications service providers from ARM and custom processors to the Intel architecture as part of the move to SDN and Network Functions Virtualization. She said she expects 5G to be an "accelerant" in that effort. Bryant also said Intel is now the leader in network silicon (counting both its data center products and the Altera FPGAs), although the slide she showed indicated it is still a highly fragmented market.


3D NAND and 3D XPoint Memory


Rob Crooke, who runs the company’s non-volatile memory group, talked about why now is “a great time to be the memory guy at Intel,” and addressed the company’s plans for both 3D XPoint and 3D NAND flash memory.


I was a bit surprised to hear relatively little about the Optane drives, which Intel is preparing using the 3D XPoint technology. These drives are arriving a bit later than originally expected, but Crooke said the company has begun shipping the first units to data centers and has a clear path for three generations of the technology. He seemed to be positioning them, at least initially, more as eating into the market for high-performance memory (DRAM) than for traditional SSD storage. In the long run, though, both Crooke and Krzanich sounded very optimistic about Optane, and not only in the data center but in enthusiast PCs as well, with Krzanich saying that "every single gamer" will want Optane in his or her system.


Crooke said this would be “an investment year” for Optane, with the company expecting such drives to account for less than 5 percent of total storage revenue.


Crooke and 3D NAND Technology


Crooke was extremely enthusiastic about the firm's plans in 3D NAND. He thinks Intel has a competitive advantage with its 3D NAND products because its design—created in conjunction with manufacturing partner Micron—offers higher areal density and a better cost structure than competitors'. Intel currently ships a 32-layer 3D NAND product, but Crooke said it is on track to deliver a 64-layer product for revenue in the third quarter, only five quarters after the 32-layer version shipped, and that 3D NAND should make up 90 percent of the company's NAND shipments by the end of the year. Crooke also talked about how Intel currently produces this at a joint-venture fab with Micron in Singapore, how it is ramping a big factory in China on its own, and how it will work with Micron on another factory.


Crooke and 32 TB of 3D NAND


To illustrate how fast density is improving with this technology, Crooke first held up a 1 TB hard drive, and then showed how the first-generation 1 TB SSD was a bit smaller. Then he held up the 1 TB module currently shipping, which looks to be about the size of a stick of gum, and finally the module Intel will be shipping later in the year, a single thumbnail-sized package. To illustrate how this will affect the density of a data center, he held up a thin 32 TB module designed for servers and said that with it you could now get 1 petabyte into a thin 1U server, instead of the full rack that would be required with hard drives.
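The arithmetic behind that petabyte claim is straightforward; here is a rough sketch, using decimal units and ignoring controller and redundancy overhead (my simplifications, not Intel's exact configuration):

```python
# How many 32 TB modules (or 1 TB hard drives) it takes to reach 1 petabyte.
# Assumes decimal units (1 PB = 1,000 TB) and no RAID or formatting overhead.
petabyte_in_tb = 1_000
module_capacity_tb = 32
hdd_capacity_tb = 1

modules_needed = petabyte_in_tb / module_capacity_tb   # ~32 modules: plausible in a dense 1U chassis
hdds_needed = petabyte_in_tb / hdd_capacity_tb         # ~1,000 drives: a full rack's worth
print(f"32 TB modules for 1 PB: {modules_needed:.0f}")
print(f"1 TB hard drives for 1 PB: {hdds_needed:.0f}")
```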


Internet of Things & ADAS


Davis and IoT Markets


Doug Davis, who has been running the firm’s Internet of Things group and is now focusing on the advanced driver assistance systems (ADAS) group, talked about both of those areas.


On IoT, he said Intel's interest is primarily in the value data gains as it moves through the network to the cloud, and in applying analytics both in the cloud and at the edge. He said the difference between IoT and earlier embedded systems is primarily connectivity and the use of open platforms. Davis cited a Gartner study that counted 6.4 billion connected things at the end of last year, an increase of 30 percent over 2015.


In particular, Davis focused on the retail, transportation, industrial/energy, and video markets, including network video recorders and data analytics moving to cameras and video gateways.


Davis’s biggest focus was on autonomous driving, which he said would be the most visible AI application in the next 5 to 10 years. He talked about how this will require connections back to the cloud and said that while today’s cars use $100 to $200 of silicon (much of this for the infotainment system), by 2025 the silicon bill of materials may increase to 10-15 times that number. Davis said Intel is involved in a number of autonomous vehicle tests, including a 5G trial platform, and has a partnership with BMW and Mobileye for the next generation of such vehicles.



Michael J. Miller is chief information officer at Ziff Brothers Investments, a private investment firm. Miller, who was editor-in-chief of PC Magazine from 1991 to 2005, authors this blog for PCMag.com to share his thoughts on PC-related products. No investment advice is offered in this blog. All duties are disclaimed. Miller works separately for a private investment firm which may at any time invest in companies whose products are discussed in this blog, and no disclosure of securities transactions will be made.

http://www.pcmag.com/article/351692/data-center-new-initiatives-top-agenda-at-intels-investor


The Top SaaS Vendors, and Why Consolidation May Be Harder Than It Looks

There’s little doubt that more and more applications are moving from on-premises solutions to Software-as-a-Service; that’s been going on to some extent for at least 18 years, since the early days of Salesforce (or even earlier, if you want to count the payroll processing services from firms such as ADP.) In recent years, this trend has picked up a lot of momentum.


After hearing Oracle co-CEO Mark Hurd suggest that by 2025 two companies would account for 80 percent of all SaaS revenues, I decided it would be interesting to see where the market is now, and just how consolidated it is. It turns out that it's actually pretty hard to estimate just how large SaaS revenues are and to compare the different types of companies. After all, some companies, such as Salesforce and Workday, are "cloud-native" and only offer cloud solutions. But the biggest software vendors have also been buying SaaS providers; Oracle acquired NetSuite, for example, and SAP bought Concur. (For more on these, see my last post.)


I also left out a number of security and networking vendors, since these aren't really general productivity applications, as well as a few obvious vertical-market providers (such as athenahealth and FIS), since they aren't general-purpose SaaS companies.


Then there are the vendors that just don't give enough detail to make their SaaS revenues at all clear. Amazon WorkSpaces, for example, is probably a rounding error in comparison with the company's long list of infrastructure and platform services. Similarly, G Suite belongs here somewhere, but Google doesn't break it out, and it is certainly a small part of the company's overall revenues. The same is true for some of the larger, more diversified technology companies: Dell Technologies offers a number of SaaS products, such as Spanning and Boomi, but doesn't break out the numbers, and these are probably a small percentage of revenue. The same is true for Cisco.


Of course, the biggest issues come up for companies where SaaS is a significant percentage of revenues but where the definitions are not clear. It's tough to break out cloud revenues among companies that are more diversified and offer both SaaS and on-premises software, maybe even some hardware. So I'll admit that these are just guesses, and I would love any comments that would help make them more accurate. Here's the list, but pay attention to the notes below; a short sketch of the arithmetic follows them.



1) Microsoft said its "commercial cloud revenue" run rate grew to more than $14 billion, suggesting quarterly revenues of about $3.5 billion, which would be split among its Azure (IaaS and PaaS) offerings and its Office 365 and Dynamics 365 SaaS services. Its total "productivity and business processes" group, which would include these products as well as traditional Office and on-premises offerings, did $7.4 billion in revenue. I'm going to guess that Office 365 and similar products are a bit bigger than Azure, so let's say $2 billion.


2) For ADP, unlike most of the companies on the list, it's mostly a question of what is software and what is a service. The company said it did $2.3 billion in revenue from "employer services"—essentially human capital management and HR services, including payroll. Some people would call this SaaS; others wouldn't. Since it competes with companies like Workday and Ultimate Software, I'm including it. If we call half of it SaaS, that's $1.15 billion, landing it near the top of the list.


3) Adobe reported a run rate of $4.01 billion for its “digital media annualized recurring revenue” including its Creative Cloud and Document Cloud products. Turning that into a quarterly number would make it about $1 billion. Much of this is client software delivered in a cloud model (just like with Office 365), so as with ADP, I’m counting half of that, or $500 million, and then adding the $465 million from its Marketing Cloud product.


4) Intuit is an interesting case, in that its business is highly seasonal, since its tax preparation and electronic filing software is used much more in the early part of the year. The consumer part of that business is mostly online, accounting for 90 percent of the company’s TurboTax users in its big quarter, and virtually all of the users in the current, smaller quarter. In its last reported quarter, the consumer tax business accounted for $42 million in revenue, but in the previous quarter, it was $1.6 billion. Meanwhile, QuickBooks Online and related products accounted for $179 million. So for the most recent quarter, the SaaS number would be roughly $221 million (not counting desktop or enterprise versions of QuickBooks, other small business products, or the professional tax business). However, that’s not representative of the year as a whole. I took the full year’s consumer tax business (just shy of $2 billion), took 90 percent of that, divided by 4 to get a “typical quarter,” and added in the QuickBooks online number, which gives me $662 million. This seems more representative, arguably.


5) IBM doesn’t distinguish among the different kinds of cloud revenue it earns and calls some things cloud that I wouldn’t, but I’m listing it with $600 million, based on reported cloud revenues for its cognitive services group, which includes Watson and other analytics. Based on most definitions, that’s probably high, but it’s the best I could find.


6) Oracle reported $878 million in combined SaaS and PaaS services, meaning a combination of its cloud-based applications, such as HR and CRM as well as database and similar services. For much of Oracle’s business—specifically, the apps that make up its E-Business Suite—customers need to be running both the application and the database platform it runs on. I’m taking half of the revenues, which would result in $439 million in quarterly revenue. (Note that NetSuite, which Oracle acquired, had $230 million in revenue in the second quarter).


7) Dropbox is a private company, but its CEO recently reported it was on a $1 billion run rate, so I’m taking this as $250 million in quarterly revenue.
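To make the assumptions in the notes above explicit, here is a small script that reconstructs the quarterly estimates for the vendors where I did the arithmetic (IBM is omitted because its figure comes straight from a reported number). The inputs are the figures cited above, rounded, so the results won't match my numbers to the dollar; the haircuts, such as counting only half of a revenue line, are my own judgment calls rather than anything the companies report.

```python
# Rough reconstruction of the quarterly SaaS estimates discussed in the notes above.
# Inputs are the (rounded) figures cited in the article, in billions of dollars;
# the "count half" adjustments are the article's judgment calls, not reported data.

def quarterly_from_run_rate(annual_run_rate_billions):
    """Convert an annualized run rate into an approximate quarterly figure."""
    return annual_run_rate_billions / 4

estimates = {
    # Microsoft: ~$3.5B/quarter of commercial cloud, with Office/Dynamics 365
    # guessed to be a bit bigger than Azure.
    "Microsoft": 2.0,
    # ADP: half of $2.3B in employer-services revenue counted as SaaS.
    "ADP": 2.3 / 2,
    # Adobe: half of the ~$1B/quarter digital-media ARR, plus $465M of Marketing Cloud.
    "Adobe": quarterly_from_run_rate(4.01) / 2 + 0.465,
    # Intuit: 90% of roughly $2B in annual consumer tax revenue, spread over four
    # quarters, plus $179M of QuickBooks Online (these rounded inputs land near
    # $630M, a bit under the $662M worked out above).
    "Intuit": (2.0 * 0.90) / 4 + 0.179,
    # Oracle: half of $878M in combined SaaS and PaaS revenue.
    "Oracle": 0.878 / 2,
    # Dropbox: a $1B run rate taken as a flat quarter.
    "Dropbox": quarterly_from_run_rate(1.0),
}

for vendor, billions in sorted(estimates.items(), key=lambda kv: -kv[1]):
    print(f"{vendor:>10}: ~${billions * 1000:.0f}M per quarter")
```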


That’s the best I’ve been able to come up with, though I know it’s far from perfect. I’m sure I didn’t make all the right decisions, so I’d love any feedback on how to improve this list.


One thing that does stand out: of the top 20 vendors I found, the top two account for 40 percent of the revenue, a pretty strong percentage, but still a long way from the 80 percent concentration that Oracle predicted. However, if you look at total revenues and exclude IBM and HP (where applications are a very small part of the revenue), the two large vendors—Microsoft and Oracle—account for 64 percent of the total revenues. (Of course, these two also offer many things beyond applications software.) If you assume the bulk of applications revenues will convert to SaaS over the next few years, that may be a better predictor of how the revenues may break out. Also, recall that I’m excluding Amazon and Google, either of which could be considered part of this chart.


In other words, it seems quite possible that we’ll see significant consolidation in the field, either through the big companies growing their percentage of SaaS revenues or through acquisitions. However, getting to 80 percent seems like a tall order. Stranger things have happened, but it doesn’t look likely to me.


Again, I know I’m making a bunch of assumptions in creating this chart and would love to see more accurate estimates of SaaS revenue for the more diversified companies. I’m open to suggestions.





http://www.pcmag.com/article/351605/the-top-saas-vendors-and-why-consolidation-may-be-harder-th


Techonomy and the Economy: Is Change Happening Faster Than Society Can Absorb It?

To me, the most interesting topic at last week’s Techonomy 2016 conference was the impact that technology and data are having on the economy as a whole. As the conference immediately followed the election, it was a topic that came up in a variety of sessions—with a surprising number of comments about how changing technology has made many people uneasy, and how that may be hurting the economy and affecting how people vote.



“Change is happening at a much faster pace than society can absorb the changes,” Tony Scott, the Federal CIO of the United States, said in the opening panel, noting that changes in technology, energy, and other areas are fundamentally changing where jobs are and how people live. Still, he said, “relentless digitalization” is inevitable.


Simulmedia CEO Dave Morgan noted that job loss to technology will only intensify, as 1.5 million driving jobs—the largest single job category for white men outside of government—will disappear over the next four to five years. (I believe he is wildly overestimating the pace of change here, but we'll see.) Morgan stressed that, though economic issues are important, dignity is also important; in the small Pennsylvania city where he grew up, people not only used to have jobs, they felt good about them.


Morgan referenced a 1946 book by Peter Drucker, Concept of the Corporation, which lamented the growing use of cost accounting and argued that the relationship between labor and management had changed. In the 1950s, Morgan said, businesses paid a living wage, offered health plans to cope with catastrophic incidents, and offered pensions, so workers participated in the growth of a company. Over time, pensions have disappeared, fewer companies offer health insurance, and wages are now treated as costs.


BlackBerry CEO John Chen said the bicoastal tech industry has largely missed the concept of jobs, and this has led to some of the anger directed toward the industry. Chen said he supports infrastructure investment and stressed the importance of cybersecurity.


Scott agreed that some paradigms need to be reexamined. He noted that we assume everything should interoperate with everything else, but that in the near future we may need to ask whether the system we are connecting to is safe and performing the way it should.


Scott said that the government is on an unstoppable track to digitization that should improve interaction with the citizenry. For instance, he said that today’s technology pretty much follows the org chart, so you need to understand the organizational structure to locate a site for the information you’re after. This, he said, will change no matter who is president.


Similarly, Scott said the federal government spends $85 billion a year on technology, with more than 80 percent of that going simply to "keep the lights on." He said we are now "air-bagging and bubble-wrapping old stuff" for cybersecurity, but that we need to upgrade and replace systems in order to get to a more modern platform. Scott mentioned a bipartisan bill to create an Information Technology Modernization Fund to hasten IT advancements and upgrades at the federal level.


There were a number of good questions and comments from the audience. Gary Rieschel of Qiming Venture Partners, who spoke in an earlier session, said there is a perception among Trump and Sanders supporters that "America is no longer fair." Where you live and how much money you have determine your quality of education and access to healthcare, Rieschel suggested, and while technology may help, it can only do so if it comes from the citizens up, not from the top down. Rieschel pointed out that, until the 1970s, unions had large apprenticeship programs, but since then the skills of workers have eroded as older workers retired and younger workers weren't retrained.


Roger Pilc of Pitney Bowes talked about how technology has helped democratize international trade. He quoted Alibaba's Jack Ma as saying that over the last twenty years this has mostly helped large businesses, but that over the next twenty it may help medium and small businesses. Pilc pointed to areas such as shipping and logistics, citing cloud technologies, APIs, mobile, and IoT as tools that can help smaller firms, and noted that most job creation comes from small and medium-sized businesses.


Others in the audience talked about how technology may not be the answer; how U.S. companies could build call centers and even coding centers in Middle America; and about education. I noted a comment that the technology industry should not be surprised by the anger in the country, as many groups—especially women and minorities—are also angry at how they have been treated by tech.


The Economic Impact of Data Convergence


Annunziata, Farrell, Kirkpatrick


I was quite interested in a conversation on the economic impact of data convergence, which featured GE’s Chief Economist Marco Annunziata and Diana Farrell, Founding President and CEO of the JP Morgan Chase Institute and a former Deputy Director of the National Economic Council.


David Kirkpatrick, who moderated the discussion, said that data shows life is improving in almost every major country. But Annunziata said that in most cases the narrative is more powerful than the data. He said there is a lot of hype around data, but that its impact on the economy so far has been small; going forward, however, he talked about using data to generate real value.


Farrell said that one big problem is that while the overall economy has strengthened, the level of anxiety remains high. She said take-home pay has been particularly volatile, with 55 percent of Americans seeing income swings of more than 30 percent month to month over the course of a year. Farrell said a fear of "the liquidity trap"—a concern about running out of liquid money—is real for almost all Americans.


Farrell said that the "gig economy" employs about 1 percent of adults in a given month, and only 4 percent of adults over the last three years. These are primarily young and disproportionately low-income workers, who mostly view such work as supplemental income, used to offset volatility rather than as a replacement for a job.


In a discussion of how people view data, Ford Motor Co. VP of Research and Advanced Engineering Ken Washington said that even though the government has lots of data on people, it is all in silos, and thus it is incredibly difficult to obtain holistic information on an individual. Washington said there were few ways for either the government or commercial companies to pull this information together, and said people are frustrated that the data is out there but not improving their lives.


Annunziata agreed, saying it seemed strange that the government "knows all this information about me, but treats me as a stranger when I go to the airport." He worries about things like data sovereignty laws in Europe: he said that ring-fencing data doesn't make it secure, and that preventing data from being aggregated could negate its value.


On the question of government use of data, I was interested in a separate discussion with Marina Kaljurand, former Minister of Foreign Affairs for the Republic of Estonia. She talked about how her country had created an “e-lifestyle” that started with government digital systems used to pay taxes, to vote, and to receive report cards. This was based on digital signatures using two-factor authentication and the goal of having a “paperless” approach to government. I think that’s an interesting goal, but one that seems hard to reach in a country as diverse as the U.S., where individual states have their own policies and rules.


Overall, I wonder if Silicon Valley overestimates its direct impact on the economy, but underestimates the secondary impacts of the new technologies it creates.

http://www.pcmag.com/article/349600/techonomy-and-the-economy-is-change-happening-faster-than-s
