Why 5G Isn’t Aimed at Mobile Phone Users

Just as 2G networks gave way to 3G, and 3G networks gave way to 4G, you might expect 5G networks to replace 4G LTE. But that’s not likely to happen, or at least not anytime soon, according to a number of speakers at the Brooklyn 5G Summit at the NYU Tandon School of Engineering last week.


Instead, the speakers I talked to explained that 4G is expected to form the backbone or anchor for networks going well into the future, and will likely continue to be the way most of us get our mobile data, with 5G being used to supplement 4G and provide data for other applications.


“5G is the first G that won’t replace the one before,” said Tod Sizer, who heads mobile radio research for Bell Labs. Sizer said current 4G LTE solutions perform very well for voice, web surfing, and even video, but that new 5G networks will provide more flexibility for applications such as controlling machines on a factory floor, improved reliability compared to Wi-Fi, and better latency than LTE. But he said such networks will likely leverage both 4G and Wi-Fi and add new capabilities on top.


The big difference at this year’s summit, compared to earlier meetings, is that the industry is now “moving from how to what,” Sizer said, as the specifications for 5G get more settled, even though the applications aren’t as clear.



Ken Budka, of Bell Labs Consulting, talked about how 5G will enable “the next industrial revolution,” which will be brought about by combining digital interfaces and sensors on devices and machines with advances in machine learning and AI, all on the future network.


The key to this, Budka said, is the transformative power of low-latency services. Some applications require high bandwidth and others require low latency, but few need both, with VR and AR being the notable exceptions.



Budka noted that in general, 5G will aim to provide one to three milliseconds of latency, compared to about 100 milliseconds in 4G. This is important for a number of applications. For instance, Budka said that in a car moving at 100 kilometers per hour, the difference in latency between 1 ms and 100 ms is half a car length. He also listed other applications, including industrial robotics, 3D printing in construction, cooperative robot and drone control, and teleoperation, in which humans operate devices from a remote location.
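To put those numbers in perspective, here is the arithmetic behind Budka's car example; the comparison to half a car length assumes a typical sedan of roughly five to six meters, which is my own rough figure rather than one he cited.

```python
# How far does a car traveling at 100 km/h move during a given network latency?
SPEED_KMH = 100
speed_m_per_s = SPEED_KMH * 1000 / 3600   # about 27.8 m/s

for latency_ms in (1, 100):
    distance_m = speed_m_per_s * latency_ms / 1000
    print(f"{latency_ms:>3} ms of latency -> {distance_m:.2f} m traveled")

# Output:
#   1 ms of latency -> 0.03 m traveled
# 100 ms of latency -> 2.78 m traveled
# The roughly 2.75 m gap is about half the length of a typical car.
```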


From a design standpoint, he said a traditional core network works fine for 4G and most applications, but that an “edge cloud” will be needed for applications such as VR/AR and system control, where latency is crucial.


5G New Radio



Durga Malladi, Senior VP of Engineering at Qualcomm Technologies, talked about 5G as a “unifying connectivity fabric” serving three broad buckets of applications with very different needs. Enhanced mobile broadband requires higher speeds for some applications and the ability to handle more people using more data every year. For a massive Internet of Things network, most of the devices use very little data, but consistently low power consumption is key, as these devices need to work for years on small batteries. And there will also be mission-critical services that don’t necessarily use a lot of data but require very low latency.


Malladi talked about continuing to use 4G as a base to provide the core mobile broadband connections, and using the 5G New Radio standard in spectrum above and below what is typically used for 4G, both to help in basic cases and to enable new applications. This includes sub-6GHz bands to provide more mobile coverage, and mmWave bands such as 28GHz and 39GHz, which provide high bandwidth over short distances; those bands are being used in initial fixed deployments but may also be used in mobile deployments.



The key technologies in 5G New Radio include Massive MIMO (which involves using large arrays of antennas), robust mmWave support, advanced channel coding, and OFDM-based waveforms (including DFT-spread OFDM) for both uploads and downloads. Both stand-alone and non-standalone versions of the 5G standards are on track to be approved next year (as part of the 3GPP R15 specification), with the first deployments expected in 2019.


Much of the conversation at the summit was about the 5G New Radio standard, with detailed discussions of many of the underlying technologies, including the various MIMO waveforms.



Mikael Höök, Director of Radio Research for Ericsson Research, talked about how the new radio is necessary for new use cases, including broadband and media everywhere, sensor networks, smart vehicles, control of services, and remote devices. He said standardization is proceeding at full speed on the new timeline, and that the standard would have “future-proof” hooks to allow for future evolution.


Höök said the principles behind the design of the new radio include spectrum flexibility, low latency, a beam-centric design, connectivity, and interworking across spectrum bands.


Many of the technical details Höök described were interesting, particularly those covering how the system is designed to enable different use cases. For instance, it includes bandwidth adaptation, so a device can listen on a narrow bandwidth but switch to a wide bandwidth when receiving large amounts of data, thus saving power. Other techniques are used to provide low latency and what he called URLLC (“ultra-reliable and low-latency communications”).


In other sessions on the new radio, Huawei Fellow Peiying Zhu gave a talk on waveform, numerology, and channel coding. She talked about many of the changes necessary for the new radio, along with the need for LTE co-existence. Later, Zhu joined with Höök and others in a panel discussion. Höök said that there are use cases and deployments of the new radio, so we know the technology works, but that the uptake of new services is still in question.


Docomo Keynote


Seizo Onoe, CTO and EVP of NTT DOCOMO, gave a generally positive speech about 5G, but may have been the most realistic about the challenges it faces.


Onoe said the economics of 5G are the “elephant in the room,” as the technology will require many small cells and will force intense backhaul and backbone modernization, and thus substantial capital expense. But, he said, 5G’s efficiency offers the promise of increased data capacity without increased capital expense; in particular, he cited the use of Massive MIMO and other technologies, as opposed to thinking of 5G as merely a complementary “hotspot service.”


Onoe said he is not worried about fragmentation from early versions and said front runners should take responsibility for compatibility. DOCOMO plans a 2020 rollout, he added, which is enough time to set standards.


However, Onoe gave two “dark premonitions.” In wireless technology, the previous generation often booms just before the launch of the next, as happened with enhanced 3G (HSPA+) before the 4G LTE launch. This could happen this time around, he said. Additionally, the industry has historically seen great success only with even-numbered generations, and Onoe wonders if the industry will need to wait for 6G to get everything it wants.


Still, Onoe concluded on a positive note and said that while there are lots of myths about 5G, he believes that the industry should “get on the 5G bandwagon” and create new business models through collaboration across industries.


Progress toward 5G Deployment


Dave Wolter, AVP for Radio Technology and RAN Architecture at AT&T, talked about how the company wants to speed up deployment while still respecting standards, and discussed a variety of tests the company has conducted.



Wolter noted that the December deadline for non-standalone 5G-NR (the standard that uses LTE as an anchor) is crucial because it involves all of the hardware-impacting parts and is necessary so silicon vendors can start designing chips. The idea is to ensure compatibility with the standalone version when that is completed in June 2018. Wolter said AT&T hopes for a standards-based 5G New Radio (NR) deployment as early as December 2018, most likely with fixed wireless first, but with mobility to follow soon after. Still, there remain a number of details left to be decided, he noted.


Wolter described how this would evolve over time, with fixed wireless likely first, then an upgrade to the next-generation core, and eventually widespread 5G deployment. He said AT&T will prioritize a non-standalone mode, and while it has an interest in standalone 5G, that effort will take time. He did note that there is no sub-6GHz spectrum available for 5G in the U.S. except for around 3.5 GHz, which has other issues. (It is currently used for Defense Department and fixed satellite services, which could theoretically share the spectrum.)


In the meantime, AT&T is most interested in the 39GHz frequency and has done a number of both fixed and mobile tests at 28GHz and 39GHz in partnership with Ericsson and Intel, using techniques such as automatic beam tracking and massive MIMO.


I had the chance to ask Wolter about when he thought we’d see 5G support in handsets, and he said that might have to wait for 3GPP’s Release 16 (as opposed to the Release 15 standards now in development). But he said the big advantage of 5G for mobile broadband is its high density, which provides a better experience when more people are watching video simultaneously, as well as making mobile augmented reality and virtual reality work much better.


In a panel discussion, it was clear that different operators across the world have different visions. YongGyoo Lee, SVP for Korea Telecom, talked about having specifications ready so it could offer some 5G services for the 2018 Winter Olympics (though it appears that much of this will be pre-standard technology). Similarly, DOCOMO’s Onoe talked about having a system running for the 2020 Summer Olympics.


However, Frank Seiser, a VP for Technology Innovation at Deutsche Telekom AG, said he saw no driver for standalone 5G as currently proposed, and added that European operators have neither the fixed wireless spectrum the U.S. operators want to use nor the impetus of the Olympics that is driving the Japanese and Korean operators.



In slides presented later at the summit, Seiser pushed the idea of a cloud-native architecture with much more network function virtualization and a service-based architecture, but worried that the current Release 15 standards under development don’t provide for these features. Seiser said that a joint industry push for a much more innovative 5G system architecture is needed in order to deliver on the 5G vision. For now, he said, he sees little you can do better with 5G than with LTE, and it may be 6G before that changes.


In the meantime, almost everyone liked the idea of using non-standalone 5G, and all indicated it would be a long time before any of the current LTE spectrum will be used for 5G.



Michael J. Miller is chief information officer at Ziff Brothers Investments, a private investment firm. Miller, who was editor-in-chief of PC Magazine from 1991 to 2005, authors this blog for PCMag.com to share his thoughts on PC-related products. No investment advice is offered in this blog. All duties are disclaimed. Miller works separately for a private investment firm which may at any time invest in companies whose products are discussed in this blog, and no disclosure of securities transactions will be made.

http://www.pcmag.com/article/353376/why-5g-isnt-aimed-at-mobile-phone-users


Testing AMD Ryzen and Intel Kaby Lake For Business Use

The most interesting processor announcement of the year has been AMD’s Ryzen desktop processors, based on the company’s new Zen architecture. I’ve been looking forward to AMD getting more competitive in this market for some time, and now that I’ve had the chance to run some real benchmarks, I’ve found some interesting differences, with AMD looking great in some tests, but lagging in other areas.


AMD has historically been Intel’s main competitor in desktop and laptop chips, but has lagged appreciably over the past few years, so much so that it hasn’t really been worth trying to compare the two. But Ryzen is much more competitive. Indeed, the top-of-the-line Ryzen 1800X, based on the company’s Summit Ridge platform, offers eight cores and 16 threads, with a nominal clock speed of 3.6 GHz and turbo speeds of up to 4.0 GHz. It’s manufactured on GlobalFoundries’ 14nm process, sells for $499, and AMD has mostly been comparing it to Intel’s Core i7-6900K (Broadwell-E), which has a similar number of threads at more than twice the cost.


Over the past several weeks, I’ve seen a lot of benchmarks comparing the two Ryzen 7 eight-core chips to the 6900K. The fastest of the newest Intel chips, the 4-core, 8-thread Core i7-7700K (based on the Kaby Lake platform), has a nominal clock speed of 4.2 GHz, a turbo speed of 4.5 GHz, and a list price of $350.


This includes reviews from sites such as Anandtech, Tech Report, and our sister publications ExtremeTech and Computer Shopper.


Most of these reviews have focused on general purpose applications, and in general, Ryzen looks pretty good; in gaming, Ryzen does well on 4K tests but somehow seems to lag a bit in a number of 1080p benchmarks.


But my primary interest is business computing, and high-end business applications in particular. I understand why AMD would want to compare Ryzen 7 to Broadwell-E, since AMD gives you the same number of cores for less money, but I don’t see much use for Broadwell-E in business (although I suppose it could have an application in things like video encoding). Broadwell-E has mainly been pushed for very high-end enthusiast and professional gaming desktops, and is an older part that’s likely to be replaced soon. Instead, I wanted to look at the latest and greatest from each company, so I focused on comparing the Ryzen 7 1800X to the Kaby Lake Core i7-7700K.


I thought this would be particularly interesting because AMD’s Ryzen 7 has more cores and threads (8/16 compared to 4/8 for the Core i7-7700K), but the Core processor has a faster clock (4.2 to 4.5 GHz vs. Ryzen’s 3.6 to 4.0 GHz). Note though that there are other differences, including (notably) that the current Ryzen chip supports only 128-bit wide AVX (SIMD) instructions, versus 256-bit support on Kaby Lake.


(All tests were run in systems with top-end MSI Xpower Gaming motherboards, 16 GB of Corsair Vengeance DDR4 memory, a 240 GB Kingston Digital SSD V300 SATA 3 SSD, and an eVGA Nvidia GeForce GTX 1080 graphics board.)


General Business Tests



CPU-Z sheds light on the raw horsepower of the systems, but not specifically on business performance. Here Ryzen 7 has a clear lead, even on the single-threaded test, which shows the company has made significant progress with its Zen core design. But it really shines on the multi-threaded test – reflecting its 16 threads compared to the Core i7’s eight.


PCMark 8


We tested the complete version of this benchmark, which runs a series of scenarios in common business applications. Kaby Lake wins here, in both the conventional and OpenCL-accelerated versions of the test, but Ryzen looks quite good. In the real world, I’m not sure you’d notice much difference because, let’s face it, most of these tasks now run fast enough on just about any machine on the market.


Encryption


While TrueCrypt isn’t used as much as it once was, it remains an interesting benchmark for encryption. Both chips support AES encryption natively, and simply having more cores made the Ryzen shine on this test.


File Compression


7-Zip is a popular compression/decompression program for Zip and other archive files. Here the results are very interesting, with Kaby Lake much faster at compression and Ryzen much faster at decompression. Most of us decompress files a lot more often than we compress them, so this is probably a good tradeoff for AMD.


Overall, for typical business use, you’d be quite happy with either choice.


Scientific Computing


STARS Euler


For scientific computing, we used the Stars Euler 3D computational fluid dynamics test. This seems to be very much dependent on memory bandwidth as well as core count, and here the Kaby Lake processor does a bit better, but not much. Other testers suggest the Broadwell-E would really be much faster on this test.


Y-Cruncher


Another test that may be applicable to scientific computing is Y-Cruncher, a program that can compute Pi to an arbitrary number of digits. It has been optimized for many different processors, including a recent optimization for AMD’s Zen architecture.


We tested for computing Pi to 2.5 billion digits, and found it took Ryzen 303 seconds of computation time using the Zen optimization (compared to 337 seconds without), vs. 280 seconds for Kaby Lake. Kaby Lake was significantly faster, likely because of the superior AVX2 support in the Intel processor.
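For reference, here is how those reported times translate into relative speed, a quick calculation using the numbers above (treating "faster" as the ratio of run times):

```python
# Reported Y-Cruncher times in seconds (lower is better)
ryzen_zen_optimized = 303
ryzen_unoptimized = 337
kaby_lake = 280

print(f"Gain from the Zen optimization: {ryzen_unoptimized / ryzen_zen_optimized - 1:.1%}")
print(f"Kaby Lake vs. optimized Ryzen:  {ryzen_zen_optimized / kaby_lake - 1:.1%} faster")
```

That works out to roughly an 11 percent gain from the Zen optimization and about an 8 percent advantage for Kaby Lake.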


In general, scientific computing is probably a case where spending more for the biggest processor you can find makes sense. Kaby Lake beat Ryzen here, but the real choice would probably be the Broadwell-E, or even a 12-core, 24-thread Xeon-E5 2600W v4 processor.


Graphics and Video


CINEBENCH R15


Based on Maxon’s Cinema 4D software, Cinebench has become a standard benchmark for 3D animation, and AMD really pushed the multi-threaded version of this test during its introduction of the Ryzen 7 processor. The CPU test renders a scene using just the CPU cores, and while Kaby Lake was faster on a single-threaded run, having more cores clearly gave Ryzen a big benefit in the multi-threaded run. Interestingly, on the OpenGL test, which is supposed to mostly exercise the GPU and is a scenario that better reflects real-world use, the Kaby Lake system was able to render scenes much more quickly.


HEVC Test


Here we took a high-quality 10-minute 4K video encoded in H.264 MPEG-4 at 50 frames per second and converted it into a 1080p H.265 HEVC video at 30 frames per second using HandBrake and the x265 open-source encoder. This test seemed to scale very nicely, keeping all 16 threads at 100 percent the entire time, and as a result, the Ryzen 7 significantly outpaced the Kaby Lake.
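For readers who want to run a similar (though not identical) workload themselves, a transcode along these lines can be scripted with HandBrake's command-line interface. The file names and the constant-quality value below are placeholders; the article doesn't specify the exact settings used in the test.

```python
import subprocess

# Rough sketch of a comparable HandBrake transcode: a 4K H.264 source down to
# 1080p H.265 (x265) at 30 fps. Input/output names and the quality target are
# hypothetical, not the settings used in the benchmark described above.
cmd = [
    "HandBrakeCLI",
    "-i", "source_4k_h264.mp4",     # placeholder input file
    "-o", "output_1080p_h265.mkv",  # placeholder output file
    "-e", "x265",                   # use the x265 HEVC encoder
    "-q", "22",                     # constant-quality target (assumed value)
    "-r", "30",                     # output frame rate
    "--width", "1920",
    "--height", "1080",
]
subprocess.run(cmd, check=True)
```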


Compilation


C++


Most mid-sized organizations and enterprises have developers who spend a lot of their time building, updating, and integrating corporate applications. For developers, we used Visual C++ 2015 to compile the LLVM compiler and tools along with the Clang front-end. (Yes, we are compiling a compiler.) This workload seems to use a mix of serial and parallel code, and Kaby Lake’s performance was notably better.
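As a rough illustration of the kind of build being timed, the script below drives a CMake-based LLVM/Clang build with the Visual Studio 2015 toolchain. The directory layout and options are my assumptions, not the exact configuration used for the benchmark.

```python
import subprocess

# Hypothetical sketch of building LLVM and Clang with Visual C++ 2015 via CMake.
# Assumes the Clang sources are checked out under tools\clang in the LLVM tree
# and that the build directory already exists.
SRC = r"C:\src\llvm"          # LLVM source tree (placeholder path)
BUILD = r"C:\src\llvm-build"  # out-of-tree build directory (placeholder path)

# Generate Visual Studio 2015 64-bit project files.
subprocess.run(["cmake", "-G", "Visual Studio 14 2015 Win64", SRC],
               cwd=BUILD, check=True)

# Build the Release configuration; this is the long, partly parallel step being timed.
subprocess.run(["cmake", "--build", BUILD, "--config", "Release"], check=True)
```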


Financial Applications


Finally, we get to the kind of applications that matter most to me: those that deal with large financial simulations.


Matlab


I started with a portfolio simulation application in Matlab, a numerical computing environment that has been widely used in financial firms for creating complex models. In this test, the Ryzen 7 came out slightly faster, probably as a result of the additional cores.


I hadn’t run Matlab on high-end desktops in a while, but both did notably better than an overclocked (3.9 GHz) Core i7-4770K (Haswell) I tested a few years ago, which completed the test in 36 minutes.


Excel


I next turned to Excel, and started with a new, larger version of a basic Monte Carlo simulation I’ve been running for a long time (previous versions of the test are now too short). I had thought that Ryzen 7 would do better on this test, because it seems to fully saturate all the threads, but in fact it was the Core i7 that was notably better on this test.
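To illustrate the kind of embarrassingly parallel workload a Monte Carlo simulation represents, and why I expected the extra threads to win, here is a generic sketch in Python. It is not the spreadsheet model used in the test; the path counts and return parameters are purely illustrative.

```python
import random
from multiprocessing import Pool

def simulate_paths(n_paths, n_steps=252, mu=0.0003, sigma=0.01):
    """Run n_paths random-walk price paths and return the average final value.
    The drift and volatility figures are illustrative placeholders."""
    total = 0.0
    for _ in range(n_paths):
        price = 100.0
        for _ in range(n_steps):
            price *= 1 + random.gauss(mu, sigma)
        total += price
    return total / n_paths

if __name__ == "__main__":
    workers = 16                 # e.g. one task per hardware thread on the Ryzen 7
    paths_per_worker = 50_000
    with Pool(workers) as pool:  # each worker runs independently, so all threads stay busy
        results = pool.map(simulate_paths, [paths_per_worker] * workers)
    print("Average simulated final price:", sum(results) / len(results))
```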


I also tried a test I’ve been running on many generations of desktop processors, involving a very large data table. Here again I had a much better score from the Intel system: the Kaby Lake took 46 minutes compared to the Ryzen’s 59 minutes, and that’s the kind of difference you would really notice in the real world.


One interesting thing I observed was that on the Intel system, while it mostly used one thread, it would occasionally spin off tasks on other threads, while on the AMD system, it exclusively used a single thread (which of course works against the Ryzen). It’s unclear to me whether this is related to the processor, or whether there is something in Excel 2016 that schedules tasks more efficiently with the Intel processor.


I can’t even say I was particularly impressed by the Intel system on this test, though. It actually came in slightly slower than the overclocked Ivy Bridge and Haswell systems from 3-4 years ago, despite the same number of cores and a higher clock rate. (With the Haswell systems, I did the tests with Excel 2013 and Windows 8; this time I’m using Excel 2016 and Windows 10, so that may have some effect.) Back then, Intel systems were almost twice the speed of AMD versions on this test. Zen shows AMD has made great strides since then, while Intel’s results do not indicate the same.


Conclusion


Overall, the results are mixed. In some cases, such as TrueCrypt encryption and HEVC encoding, Ryzen was clearly faster, which is probably a reflection of the additional threads. In other cases, such as scientific computing (tested with the Stars Euler test and Y-Cruncher) and Excel, Kaby Lake did much better, which may be attributed to its higher clock speed and 256-bit AVX support. Either would work well for most business cases.


That itself is a big win for AMD. It’s been a long time since the company has had a competitive desktop product for demanding business users, and Ryzen certainly fills that need. While I still expect Intel to dominate the corporate desktop market, in part because of the inherent conservatism of those buyers, it’s great to have another choice.

http://www.pcmag.com/article/353097/testing-amd-ryzen-and-intel-kaby-lake-for-business-use


Intel Sees Expanding Role for FPGAs, Heterogeneous Computing

Much of the interesting processor discussion has lately revolved around using different kinds of chips and cores, as opposed to the general-purpose computing cores common in conventional CPUs. We’ve seen all sorts of different combinations of chips used for particular computing tasks, including CPUs, GPUs, DSPs, custom ASICs, and field-programmable gate arrays (FPGAs), and increasingly we’re seeing applications that combine aspects of all of these, sometimes in a system and sometimes within a single chip.


Even Intel—long the proponent of general-purpose compute cores that doubled in speed every couple of years—has gotten into the act with its purchase of Altera, one of the leading FPGA manufacturers. Recently, I had an opportunity to talk with Dan McNamara, general manager of Intel’s Programmable Solutions Group (PSG)—what was once known as Altera—who shed some light on Intel’s plans in this area and gave more detail on the company’s plans for connecting different kinds of cores and different die together in high-speed chip packages.


“The world is going heterogeneous,” McNamara said, noting that there is now a common realization that you can’t solve all problems with general-use cores. Custom ASICs, such as Google’s Tensor Processing Units (TPUs), can accelerate certain kinds of functions well beyond traditional CPUs or GPUs, but they take a long time to create. In contrast, he said, FPGAs allow for customizable logic that delivers much of the performance benefit of ASICs without waiting two years for chip design and manufacturing. A developer can change algorithms within an FPGA immediately, while a CPU, GPU, or custom chip works in a fixed way.


McNamara also said FPGAs are very low-latency and can be highly parallel, with different parts of a chip working concurrently on applications such as image processing or communication.


Intel is now shipping the Arria 10 FPGA, manufactured on TSMC’s 20nm process, and offers a package that combines a Xeon (Broadwell) processor and the Arria 10. This is in use in applications such as web-scale search and analytics. McNamara said FPGAs could accelerate search by up to 10 times and noted that Microsoft has been public about using such FPGAs to accelerate search.


One big area of improvement lately has been in creating faster multi-chip packages that can combine chip dies created on different processes and perhaps from different makers. These include packages that contain a CPU and an FPGA, such as the Xeon/Arria combination; an FPGA with different transceivers, as in Intel’s Stratix 10 FPGA; or even different parts of a full CPU, as Intel described in its recent technology and manufacturing day.




Intel has created a new technology called embedded multi-chip interconnect bridge (EMIB) to do this, which debuted in the Stratix 10. In EMIB, the core die is created on Intel’s 14nm process and the transceivers on TSMC’s 16nm process.


Overall, McNamara said that several areas are moving toward adopting more FPGAs using such packaging. He talked about hyper-scale web sites, which are seeing demand change rapidly and where an FPGA/CPU combination may work well in areas like search, analytics, and video streaming, as well as network transformation, where trends such as software-defined networking and network functions virtualization are driving a need for more packet processing. Other focus areas include 5G and wireless applications, autonomous driving, and artificial intelligence (AI) applications. In AI, McNamara said optimized ASICs and raw compute power may well be best for training (Intel has purchased Nervana), but said FPGAs are often best at inference because of their flexibility and low latency, and noted that ZTE used Arria 10s to demonstrate very impressive image-recognition scores.


Personally, I’m curious to see if future CPUs really will take different components and mix and match them using EMIB or a similar technology to change what we think of as a processor chip. I’m intrigued by the idea that systems of the future may use lots of different cores, some programmable (FPGAs) and some fixed (a mix of custom ASICs and traditional CPUs and GPUs), to do things together that improve on what any single technology can do on its own.

http://www.pcmag.com/article/352959/intel-sees-expanding-role-for-fpgas-heterogenous-computing


Living With an HP EliteBook 1040

For the last couple of months, I’ve been traveling with an HP EliteBook 1040, the company’s high-end corporate notebook. I’ve found it offers many great features, including a very nice touch screen and an innovative way to protect the display from prying eyes, but there are some trade-offs associated with the machine that give me pause.


The machine I carried is the third generation of the 1040, and HP positions it as a corporate laptop that blends the features IT departments need with the look and feel professionals want, at a time when personal and professional lives are blurring. The 1040 does look like a premium notebook, with a silver-colored aluminum case and thin profile. HP says it meets a number of endurance specs, so it should be durable, and it certainly feels well-built.


Indeed, HP says that at 15.8mm thick and with a starting weight of 3.15 pounds, the 1040 is the thinnest 14-inch business notebook. It measures 13.3 by 9.2 by 0.65 inches, which makes it a bit thinner—though somewhat larger—than the ThinkPad X1 Yoga (13.1 by 9.0 by 0.7 inches).



The 1040 looks very nice, and the notebook is easy to open. It has all the ports you’d expect, including two traditional USB 3.0 Type A ports, as well as a USB-C port, though without Thunderbolt or the ability to charge the laptop through the port. Instead, it uses the same charger as earlier EliteBooks. The machine also offers full-size HDMI out, which I find useful; a smart card reader; and a connector for attaching the included Ethernet port and VGA out (which is a bit awkward but not surprising when wired Ethernet and VGA seem to be missing from most notebooks). One thing I missed is an SD card or mini-SD card reader.


The unit I tested had an Intel Core i7-6600U processor running at 2.6GHz, 16GB of DRAM, and 256GB of SSD storage; it seemed quite responsive on all the tasks I tried.


The display is perhaps the most innovative piece of the laptop. The unit I tested had a 1,920-by-1,080 Full HD display, though HP does offer models with QHD displays (2,560-by-1,440 pixels). It includes a touch screen, which I found quite responsive. But what makes it stand out is SureView, HP’s integrated privacy screen. You’ve likely seen third-party filters that cover a laptop screen and make it harder to view from an angle, but SureView is integrated into the 1040. It uses a lighting-control prism and proprietary backlight to make the screen harder to read if you are 35 degrees from center. You can turn this on or off using a function key.


I like the concept very much, but I found the implementation lacked a bit. When turned on, SureView does a pretty good job blocking the display from an angle, but at the expense of making straight-on viewing much darker and much harder to read. You can increase the brightness, but that increases visibility from the side, and I still found the screen not as comfortable to view as I would have liked. As a result, I found I didn’t use the feature much. Even when SureView is off, the screen just didn’t look as sharp or as bright compared to other premium laptops I’ve tried.


HP EliteBook 1040


For multimedia and web conferencing, it has Bang & Olufsen speakers and a dual array microphone, with software that aims to block ambient noise. It includes a webcam on the top of the screen, which provides a much better angle than the bottom-mounted webcam seen in the competing Dell Latitude 13 7000 series, and in the 7360 I used recently. In general, this seems tuned for web and audio conferencing more so than for media, but that makes sense given the target audience.


HP says the keyboard features a 1.5mm travel distance and a consistent force displacement curve, reducing strain when typing. All I can say is that the keyboard had a very nice feel, and I particularly appreciated the very large touch pad. (Although I had to learn that if you tap the touchpad twice in the upper-left corner, it turns the mouse on and off.) In general, it’s a pleasure to type on the EliteBook 1040.


For security, an interesting feature is SureStart, which certifies the BIOS hasn’t been compromised and verifies the integrity of the BIOS while the PC is running. It’s a nice extra step. The unit has a TPM chip (standard now in all corporate laptops of this class) and a hardened fingerprint reader (which uses its own encrypted memory) that supports Windows Hello.


The 1040 did quite well in performance tests, and the Core i7-6600U (Skylake) processor running at 2.6GHz (with a turbo mode of up to 3.4GHz) had the fastest results of any laptop I’ve tested on complex tests in applications like Matlab. In day-to-day use, it seemed very fast at just about every standard business task, though that’s been true of every notebook based on a full-power Intel chip I’ve tried for several years. As usual, note that no machine in this class has discrete graphics (because it would run too hot), so this isn’t the kind of machine you want if you spend your day running workstation-style applications or high-end gaming.


Battery life is, as always, dependent on what you are doing. With the screen at maximum brightness and Wi-Fi turned on, the battery lasted 2 hours and 50 minutes on PCMark 8, a bit less than I saw with the ThinkPad Yoga X1 (which had a noticeably brighter screen). In a test using Chrome automatically reloading a page every 60 seconds over Wi-Fi at maximum display brightness and using Windows’ high-performance power setting, it lasted 3 hours and 12 minutes, about 15 minutes more than the ThinkPad. PCMag’s test showed it getting 6 hours and 55 minutes on its rundown test, notably behind the 10 hours the ThinkPad and the Latitude endured. In general, I thought battery life was fine most days (particularly with SureView turned off), but I found myself worrying about power toward the end of the work day.


Lenovo ThinkPad Yoga X1


Despite the relatively low starting weight, one concern I do have with the unit is how much it actually weighs. The unit I tested came in at 3 pounds, 10.4 ounces due to the touch screen and the SureView option. In contrast, Lenovo’s ThinkPad Yoga X1 (above), with a 14-inch WQHD OLED touch screen display that flips completely around, comes in at 2.8 pounds, and that nearly one-pound difference really matters if you’re carrying the machine around all day. The 13-inch Dell Latitude 7000 series (7370) has a higher-resolution screen but an awkwardly positioned webcam. (Note that the 14-inch Dell Latitude 7000 (E7470), which I haven’t tried myself, has the webcam in the normal position.)


I found a lot to like about the EliteBook 1040; it’s fast and responsive, well-designed for web conferencing, and has some unusual security features, like the SureView screen, which may make this particularly attractive for security-conscious organizations. But on the downside, battery life, weight, and the look of the screen in normal use aren’t quite what I hoped for in a high-end corporate notebook.


For more, see PCMag’s full review of the HP EliteBook 1040.

http://www.pcmag.com/article/352935/living-with-an-hp-elitebook-1040


Digital Transformation Requires a Strong Culture, Leadership

Last week, I attended IDG’s Agenda Conference, where a group of CIOs gathered to discuss “digital transformation” and new initiatives to use technology to fundamentally change the way their companies do business. A number of executives shared their stories, mostly focusing on leadership during the technology transformation process. I found their stories quite interesting, as they show how organizations are using new technologies—cloud, mobile, analytics, and even VR and AR—to make big changes.


The theme of the conference may well have come from an opening address by Jeff Howe, co-author of Whiplash with Joi Ito of the MIT Media Lab. Howe talked about how new technologies are here, but often haven’t been adopted yet. He covered various principles that guide the adoption of technology, including how emergence is more important than authority, and how collective decision-making is often more important than that of any individual. Howe also discussed the idea of a “compass,” or general goal, being more important than “maps,” a specific way of getting to the goal; how practice is more important than theory; and how diversity expressed through practices such as crowdsourcing is more important than ability.


Howe seemed to be suggesting that we are going through a period of extreme change, and when I challenged him on that, he said it’s worth examining, though he believes we have had major discoveries that are just beginning to break through, such as CRISPR/Cas 9 in genetic engineering. I just started Whiplash, and am finding it quite interesting.


Building a Digital Belief System



One of the best talks came from Ganesh Bell, Chief Digital Officer of GE Power, which has $28 billion in revenue and creates the turbines that generate one-third of the world’s electricity. I’ve heard the basic GE pitch before but it remains fascinating. The concept is to use software to transform the core of your offering, replacing processes with software and atoms with bits.


One core piece of this, Bell said, is creating “digital twins,” or virtual copies of physical assets, which are then combined with thermal, physical, and operational models to construct new models and new operating processes for entire industrial settings. Bell said GE had to create its Predix software as an “IoT platform for industry” and is using it to understand assets across industrial settings. He said what ends up being needed, in addition to such a platform, are edge applications and industrial cyber security; the result is that each wind turbine can now generate 5 to 10 percent more electricity. All of this is sold as software-as-a-service or outcome-as-a-service, creating a new $4 billion franchise for the company.


Bell had a lot of advice for other CIOs, including his belief that you need to “build a digital belief system” that the entire “C-suite” (executive management) of the company buys into as key to the future of the organization. He also said companies should not just “digital whitewash” their products and services, but instead “re-imagine” them, and that this involves transforming culture, metrics, and talent.


Bell joined a panel on creating a culture of transformation moderated by IDG’s John Gallant, the conference host, who asked what it is about culture that makes digital transformation so hard.


Gallant, Altieri, Bell, Kiser, Labelle


During this panel, Gina Altieri, Chief Strategic Integration and Enterprise Vice President for Corporate Services, Nemours Children’s Health System, talked about how her group is working to bring the experience of children’s health care into the digital world. She emphasized that it’s crucial to partner with the business side, rather than have functional but siloed teams.


Georgette Kiser, Managing Director and CIO for The Carlyle Group, talked about breaking down hierarchies and the importance of collaboration at all levels of the organization.


George Labelle, CIO at Independent Purchasing Cooperative, which handles procurement and supply for 30,000 Subway franchises, said it’s important to get out of old processes, mindsets, and assumptions. “Things that worked 10 years ago don’t work today,” he said. Labelle noted that it takes time to change culture, and said that when his organization switched from waterfall to agile it took six months for productivity to increase.


Bell talked about how it’s important to recognize that digital transformation is a journey, and that the organization doesn’t know where it is ultimately heading; to that end, he suggested planning 2-3 steps ahead, and not 10. He talked about how one industry can learn from others, and the importance of testing, experimentation, quick failure, and learning.


All of these CIOs discussed the importance of being able to test and experiment; and they all came back to the idea that leadership is the key element in any transformation.


Hospital as a Startup


Stephen Klasko


Stephen K. Klasko, M.D., M.B.A., President and CEO, Thomas Jefferson University and Jefferson Health, gave an interesting talk about taking a 192-year-old university and making it work like a startup. Klasko gave a number of examples of how his hospital is changing the health care system, which he thinks is very much broken. (He wrote a book he initially called I Messed Up Health Care but later renamed We Can Fix Healthcare.)


In his talk, Klasko gave a number of examples of things they have implemented at Jefferson, including “Virtual Rounds” conducted via videoconference and JeffConnect, which provides virtual and video appointments, as well as a number of new processes to better check up on patients and avert re-admission.


Klasko’s general suggestions include stopping the “incrementalizing” and instead thinking about things that will be obvious 10 years from now and doing them today; thinking about disruption and dislocation and what customers really want, instead of what they say they want; and considering incentives. “It is hard to get someone to do something when their salary depends on them not doing it,” he added. Klasko framed all of this within the health care system, and he talked about how most of the debate on health care focuses on getting more people access to a broken, inefficient system, rather than on fixing the system itself. For instance, he said medical errors are now the third most common cause of death, and though there has been an incentive to reduce that number, there has been no incentive to eliminate the errors altogether.


Klasko called for an “extreme makeover of medical education” and said technology may replace 80% of what doctors can do. “Any doctor who is worried that a computer will replace them, should be,” he said. Instead of focusing on which students have done the best in organic chemistry, he said, hospitals should pick students who show empathy, creativity, and communications skills. In the 21st century, he said, it won’t be knowing the answers that defines intelligence but asking the right questions.


Tech and Hospitality


George Corbin


George Corbin, Senior Vice President, Digital, at Marriott International, talked about how technology is impacting the hospitality industry. “We have reached an inflection point,” he said, with Generation X and younger travelers likely to account for 76 percent of room nights by 2018, and millennials alone forecast to account for half of all travel purchases by 2020.


Marriott’s goal is to “win the booking; win the stay,” through a better website and better mobile app. Of particular importance, he said, are “halo moments,” or those moments that have a disproportionate impact on the likelihood of a guest staying again. Corbin said the goal is to identify these moments that matter the most and accomplish them flawlessly. The first emphasis was on digital booking, and Corbin said that this has been successful, with bookings on the digital platform up 11% year over year. The new focus is on winning the stay and targets the lifetime value of a customer. There are still “jobs to be done” here, in areas ranging from digital check-in and check-out, to service requests, to the ability to connect a customer’s device to the television in his or her room, to beacons which send personalized messages to guests.


Overall, Corbin said, this is not about technology, but about transforming a service model. He talked about the need to create clarity and urgency around the problem; make the destination clear, relatable, achievable, and inspiring; break down silos; align goals; use the familiar parts of the business that work well as a way of implementing new services; and use pilot programs to test and “de-risk” innovation.


Examples he gave included using new signage to signal and promote the mobile check-in experience, as well as aligning service requests with room service. The big challenge, he said, has been scaling new services to 600 hotels in 110 countries, and making sure these changes harmonize with operations. This is still a work in progress.


The real threat is not technology, revenue, or market share, Corbin said, but relevance.


Data, Analytics, and the IoT


Another interesting panel – and a big topic of discussion among the attendees – was the increasing use of data and analytics, particularly regarding the Internet of Things (IoT).


Brett Bonner, Kroger


Brett Bonner, VP of R&D and Operations Research for The Kroger Co., talked about using this technology to attack the problem of food-borne illness, which he said impacts one in six Americans each year.


In a project, Kroger—which runs a variety of supermarkets and similar establishments—eliminated paper logs of temperature readings and instead deployed more than 1 million temperature tags over two years that can rapidly alert personnel if a reading is above a specified temperature. More IoT programs are on the way, including scanner tags for customers, and digital displays for pricing and other messages. Bonner said the overall plan is to save 9 million shoppers a day an average of four minutes each. This involves building IoT gateways in the ceiling of each store, and creating a mesh network to connect all sorts of devices—ranging from handhelds to temperature-sensing tags—using Zigbee.
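The alerting logic itself is conceptually simple. Something along the lines of the following sketch, which is hypothetical and not Kroger's actual system, is all a gateway needs to run once the mesh network delivers a reading:

```python
# Hypothetical gateway-side check for the kind of temperature alerting described above.
# Tag IDs, thresholds, and the alert mechanism are illustrative assumptions.
THRESHOLDS_F = {"dairy-case-12": 41.0, "freezer-03": 0.0}

def alert_staff(tag_id: str, temperature_f: float, limit: float) -> None:
    # A real deployment would page store personnel; here we just print.
    print(f"ALERT: {tag_id} reads {temperature_f:.1f}F (limit {limit:.1f}F)")

def check_reading(tag_id: str, temperature_f: float) -> None:
    limit = THRESHOLDS_F.get(tag_id)
    if limit is not None and temperature_f > limit:
        alert_staff(tag_id, temperature_f, limit)

check_reading("dairy-case-12", 44.2)   # an example reading that would trigger an alert
```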


Larry Reuwer


Larry Reuwer, Global Supply Chain IT Production Strategy Lead at Monsanto, talked about how the farm is growing more digital, and said there is a need for increased farm production to feed the world as the amount of farmable land decreases while the population grows. He said Monsanto is interested in the entire journey of seed corn from the field to the processing facility to the farm, with different instrumentation and software used at each stage. For instance, Reuwer described using sensors that measure temperature, pressure, and location in the back of corn seed trucks to make sure that the seed arrives at the farm in good condition.


Jianyan Lai


Jianyan Lai, SVP & Senior Architect of the Dalian Wanda Group, said the firm now operates 187 shopping malls in China, with plans for 200 more by 2020. Each 150,000-square-meter mall tends to have 3,000 pieces of equipment, 100,000 lighting fixtures, and 53 elevators and escalators, so there is a lot of data. To handle this, the company created the Huiyun intelligent management system, an overarching system that manages 16 discrete functions (such as fire and security). Over time, Huiyun has developed into a cloud system that provides a single centralized and integrated platform, which helps improve the user experience while keeping costs down.




A couple of panels focused on specific technologies. In a panel on virtual reality and augmented reality, a number of academic CIOs discussed how new technologies are really impacting the way they deliver information, particularly when it comes to health education.


William Confalonieri, Chief Digital Officer at Deakin University in Australia, talked about using AR in the school of medicine to show a cross-section of a heart to compare with an electrocardiogram, and how this has led to students having a better understanding of what is happening when an ECG shows an abnormal heart rhythm. This is now being applied to other disciplines, including optometry, he said.


Eric Whiting, Director of Scientific Computing at the Idaho National Laboratory, talked about virtual reality, including simulating the inside of a nuclear reactor using a supercomputer, and then displaying the VR result on a smartphone. Other applications he mentioned included using LIDAR to capture information from transmission lines, 3D protein folding, and interaction with electron cloud density.


Sue Workman, Vice President for University Technology and Chief Information Officer at Case Western Reserve University, talked about a project with the Cleveland Clinic in which they replaced a traditional cadaver lab with augmented reality demos using Microsoft HoloLens, with applications including viewing how muscles work on top of a skeleton, looking inside the heart, and studying anatomy. She said augmented reality offers the opportunity for a “massive disruptive change in how we do learning and training.”


During the panel, I was interested in Workman’s suggestion that it isn’t the technology that’s the issue, but the big required investment in subject matter expertise. The other panelists agreed that technology will change and evolve, and urged the audience to get involved early on.


Piddington, Guereque, Schulze


Another panel addressed analytics. Jose Güereque, IT & Innovation Director at Arca Continental, talked about using a “big data” project to discover new information for its business supplying consumer products to small stores in Latin America. By cross-referencing information from different departments and then complementing it with external information on things such as weather, events, and microeconomics, the company has been able to better predict which products to push in which stores.


As part of the discussion, Güereque said that the most difficult part was not the technology, but changing the culture so that salesmen adopt new techniques.


Trevor Schulze, CIO and Vice President of IT at Micron Technology, talked about using data science to analyze all of the data from the complex machines the company uses to produce memory chips to improve yields, in turn resulting in improved profits. Schulze said the company had to create its own system for linking all of the various pieces together, as he couldn’t find a commercial solution which could handle the scale of the data, but that this now gives the company a competitive advantage. The concept is now used in process automation and in supply/demand matching. Schulze said this is “not an IT project” but rather a project that addresses a business problem and requires working closely with business groups within the firm.


Panel moderator Ken Piddington, CIO & Executive Advisor at MRE Consulting, talked about using analytics and machine learning to keep IT running, as downtime for consultants hurts revenue. He also talked about using sensors to track specific equipment, and doing things like predictive maintenance.


Peter Stone


Peter Stone, a professor of computer science and robotics at the University of Texas at Austin and chairman of the AI 2030 project, talked about a hundred-year study on AI and its impact on our lives.


As part of this project, every five years a group will look at where AI is headed. In 2015, Stone chaired a group that looked at possible advances in AI over the next 15 years and their potential influence on daily life. This study, which was published in 2016, identified eight areas of likely impact by 2030.


The study predicts that transportation will be the first domain where the public will be asked to trust AI on a large scale (in the form of autonomous vehicles), but suggests there is also a big opportunity for AI in health care, particularly in clinical decision support. Stone suggested AI for predictive analytics, if integrated with human care, could improve health outcomes, but only if the system can be trusted. On the hot topic of AI’s impact on employment and the workplace, he said that in the near term, AI technology will replace tasks rather than jobs, and will also create jobs, though it is always harder to imagine what types of new jobs will be created. Overall, he said, AI ought to lower the cost of goods and services and make everyone richer. He said the fear of AI replacing all human jobs in one generation is drastically overblown, but did say that the gap between rich and poor could grow.

http://www.pcmag.com/article/352839/digital-transformation-requires-a-strong-culture-leadership


Galaxy S8 Looks Great, But Bixby, DeX Are the Real Questions

Let’s get this out of the way first: Samsung’s Galaxy S8 and S8+, which were announced this week, appear to be great phones, with terrific-looking screens and a sleek new design. But despite promises of revolutionary design, what really stood out to me were the new voice assistant, known as Bixby; the promise of the phone connecting all of the devices in your home; and DeX, a dock that lets the phone act as a desktop computer.


All of these are intriguing concepts—but they still come with big questions, as the concepts have been tried before with limited success.



At the launch, DJ Koh, president of Samsung’s mobile communications business (above), argued that the S8 marked “the beginning of a new era of smartphone design.” I’ve heard similar comments from all the big phone makers this year, probably because everyone wants to be seen as doing something radically different. But it still seems like the basic paradigm of the smartphone has been set and no one can deviate too far from it.


The “infinity display” on the 5.8-inch S8 and the 6.2-inch S8+ looks great, though I’m a bit surprised that Samsung touts this as being brand new, since the curved edges on the side were a big part of last year’s Galaxy S7 edge. I admired the curved edges of that phone, and this looks like a continuation of that theme, albeit with smaller top and bottom bezels and the replacement of the physical home button with one that is embedded beneath the display.


Still, the inclusion of this design on both the standard S8 and the larger S8+ takes a niche feature mainstream, and the other design tweaks make the phone look particularly nice. I do like the idea of making the display taller so it still fits well in your hand—a concept also seen on the LG G6, announced at Mobile World Congress. I was particularly impressed by how well the S8+ fit in my hand. I do want to see how real videos look on the 2,960-by-1,440, 18.5:9 ratio display, but at first glance, the Super AMOLED display looked great.



Last year’s Galaxy S7 had a terrific camera, but I was surprised to see how few camera upgrades Samsung added, given how competitors have been moving toward dual-camera setups for better zoom (such as on the iPhone 7 Plus) or wide angle shots (such as on the G6).


Samsung did talk about a “multi-frame processor” to reduce noise and increase brightness, but that may not be very visible in daily use. Instead, Samsung seems to be focusing more on the front-facing camera; there it added an 8-megapixel sensor with autofocus and face detection. The company says the average person will take 25,000 selfies in a lifetime, so I suppose this makes sense, even if it’s not the way I typically use a smartphone.


The other hardware features look impressive, like the use of the Qualcomm Snapdragon 835 in US models and the Exynos 8895 in most overseas markets. These should be the first phones with 10nm processors to make it to market when they go on sale on April 21, and this should enable faster performance and lower power consumption. I’m also interested in Iris detection providing a higher level of security, as well as face detection making it easier to unlock your phone. (The phones still have the now-ubiquitous fingerprint detection, though it has moved to the back of the phone next to the camera; I’m not sure how convenient that will feel in actual use.)


But again, the things I thought were most interesting were the things that won’t really be tested until the phone is released.



The biggest wildcard is probably the Bixby assistant, which Samsung said differs from other voice assistants in that it is “context-aware”—Bixby knows what is happening on the screen of the device—and by its integration of voice and touch. This involves integration with Samsung’s own apps, but also third-party apps, and I’ll be curious to see how well it works. (Bixby wasn’t available to test at the launch.) You can launch Bixby by pressing a button on the side of the phone.


At the launch, I could see the various tiles Bixby creates that you reach by swiping from the right of the home screen, which looked good, though they looked very similar to the cards that Google Now enables (on the Google Pixel and other phones).






Of course, the phone will also come with Google voice support, meaning it will have two voice assistants, which may be confusing. Koh went out of his way to extol Samsung’s partnership with Google on the phone, which is reflected in its Android base, and said “Google has been by our side the entire time.”


The next big feature is Samsung Connect, a way to control Samsung and SmartThings devices through Bixby on your phone. Samsung makes a good case for the phone as a logical place from which to control all of the Internet of Things (IoT) devices we have, but the world is not made up entirely of Samsung devices. While SmartThings had a fair number of IoT integrations available, Samsung has a long way to go before it gets to the number of devices supporting the skills we see with something like Amazon’s Alexa. And we could talk for a long time about the security implications of all of these IoT devices.



And then there is DeX, which, if it is properly implemented—and that remains an “if”—could be the most exciting feature. DeX promises to let you plug the Galaxy S8 into a small dock, into which you can plug a monitor or TV. With the attachment of a wireless keyboard and mouse, you could then use this as a desktop.


The idea of turning a phone into a desktop or laptop is far from a new one: Palm was discussing it a decade ago; Motorola shipped the Atrix in 2011 with the same concept; Sentio is a “universal laptop shell” for Android devices; and Microsoft has touted its Continuum feature, designed to turn Windows Phones into PCs, exemplified by its Lumia 950 and HP’s Elite X3. None of these has really been successful.






Samsung hopes this time will be different, and has made the devices easier to use and the docks more common. Effectively, when you plug an S8 into a dock, it runs the tablet version of an application instead of the phone version, and does so in an Android interface that supports multiple, resizable windows as well as the familiar cut, copy, and paste features. Yet it uses the same files stored on the phone and the same applications installed on it, so you always have your data with you.


I was able to spend a few minutes with versions of Word, Excel, and PowerPoint, as well as Adobe Lightroom, all of which appear to work well. Samsung says more applications will be adapted for the new feature, and that Citrix, VMware, and Amazon VDI environments will be supported as well—important for this to work in a corporate environment. Executives told me the company plans to have docks available in public places—airports and hotels, for example—so that travelers can really take advantage of the capability. Again, it sounds great—but it’s all about the execution.


The company also touted new VR features, including a new version of its Gear 360 camera that looked very good, and a new motion controller.


Overall, I came away quite impressed by the S8 itself as a device, but even more interested in what the other features could mean for changing the way we use these devices. I do believe we are heading toward much more use of voice assistants, so I’m looking forward to seeing whether Bixby can really do more for me. I’m more skeptical about the IoT connections, and hopeful but uncertain as to whether DeX can really bring about the convergence of phone and PC that people have long talked about. I look forward to trying it all out.



Michael J. Miller is chief information officer at Ziff Brothers Investments, a private investment firm. Miller, who was editor-in-chief of PC Magazine from 1991 to 2005, authors this blog for PCMag.com to share his thoughts on PC-related products. No investment advice is offered in this blog. All duties are disclaimed. Miller works separately for a private investment firm which may at any time invest in companies whose products are discussed in this blog, and no disclosure of securities transactions will be made.

http://www.pcmag.com/article/352759/galaxy-s8-looks-great-but-bixby-dex-are-the-real-questions


Intel’s 10nm Process: It’s More Than Just Chip Scaling

In a series of presentations yesterday, Intel gave many more details on its forthcoming 10nm process for making advanced processors, disclosed a new 22nm FinFET process designed for lower power and lower cost devices, suggested a new metric for comparing chip nodes, and generally pushed the idea that “Moore’s Law is alive and well.” What stood out most to me was the idea that even though processors will continue to become more dense, the difficulty and cost of the new process nodes will force a complete re-think of how chips are to be designed in the future.


Mark Bohr, Intel Senior Fellow and director of process architecture and integration, gave Intel’s usual pitch about how it leads the semiconductor industry in process technology. He said Intel continues to have about a three-year lead over its competitors, even though chip foundries such as Samsung and TSMC are in the midst of rolling out what they call 10nm processes before Intel’s 10nm products come out towards the end of the year. Bohr said Intel introduced most of the industry’s main advances over the past 15 years, including strained silicon, high-k metal gate, and FinFET transistors (which Intel originally called Tri-Gate, though it has since returned to using the industry standard name).



Bohr said that the node numbers used by all of the manufacturers are no longer meaningful, and instead called for a new measurement based on the transistor count divided by the cell area, with NAND cells counting for 60 percent of the measurement and Scan Flip-Flop Logic cells counting for 40 percent (to be clear, he’s referring not to NAND flash memory cells, but rather to NAND or “negative-AND” logic gates). This gives you a measurement in transistors per square millimeter, and Bohr showed a graph reflecting Intel’s improvements on such a scale, ranging from 3.3 million transistors/mm2 at 45nm to 37.5 million transistors/mm2 at 14nm, and moving to over 100 million transistors/mm2 at 10nm.
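To make the arithmetic concrete, here is a minimal sketch of the weighted-density calculation as Bohr described it. The NAND2 and scan flip-flop figures below are placeholder values chosen purely for illustration, not Intel's published cell data.

    # Bohr's proposed logic-density metric: a 60/40 weighted average of NAND2 and
    # scan flip-flop cell densities, expressed in transistors per square millimeter.
    # The cell figures used below are hypothetical placeholders, not Intel's numbers.

    def transistors_per_mm2(nand2_transistors, nand2_area_um2,
                            sff_transistors, sff_area_um2):
        nand2_density = nand2_transistors / nand2_area_um2   # transistors per um^2
        sff_density = sff_transistors / sff_area_um2
        weighted = 0.6 * nand2_density + 0.4 * sff_density   # Bohr's 60/40 weighting
        return weighted * 1_000_000                          # um^2 -> mm^2

    # Example with made-up cell areas (a NAND2 gate has 4 transistors; a typical
    # scan flip-flop has a few dozen):
    print(transistors_per_mm2(4, 0.05, 36, 0.50) / 1e6, "million transistors/mm^2")

Run on real library data, this is the metric behind the 3.3, 37.5, and 100-plus million transistors/mm2 figures Bohr cited for 45nm, 14nm, and 10nm.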


In the past few years, Intel has been using gate pitch times logic cell height as a measurement, but Bohr said this no longer captures all of the advances Intel is making. He said that measure remains a good relative method of comparison, but doesn’t yield a hard number.


Logic Area Scaling


Bohr said that even though the time between nodes is stretching out—Intel is no longer able to introduce new nodes every two years—the company is achieving better-than-normal area scaling, which Intel calls “hyper scaling.” He showed a chart demonstrating that at both 14nm and 10nm, Intel was able to make the logic area 37 percent of the size of the logic area at the previous node.


Die Area Scaling


Bohr noted that other parts of a processor—notably static random-access memory and input-output circuitry—aren’t shrinking at the same rate as logic transistors. Putting it all together, he said the improvements in scaling will allow Intel to take a chip that would have required 100 mm2 at 45nm and make an equivalent chip in just 7.6 mm2 at 10nm, assuming no change in features. (Of course, in the real world, each subsequent generation of chip does add more features.)
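A quick back-of-envelope check on that example, using my own arithmetic rather than anything Intel presented: 45nm to 10nm spans four node transitions, so the 100 mm2-to-7.6 mm2 shrink implies an average whole-die area factor of roughly 0.53 per node, noticeably less aggressive than the 0.37-per-node logic-only figure, which is exactly the SRAM and I/O drag Bohr described.

    # Whole-die scaling implied by Bohr's 100 mm^2 (45nm) -> 7.6 mm^2 (10nm) example.
    # Assumes four node transitions (45 -> 32 -> 22 -> 14 -> 10nm); my arithmetic, not Intel's.
    overall_factor = 7.6 / 100.0                 # ~0.076 of the original die area
    per_node_factor = overall_factor ** (1 / 4)  # average area factor per node, ~0.53
    print(f"average whole-die area factor per node: {per_node_factor:.2f}")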


Stacy Smith, Intel’s executive vice president for manufacturing, operations, and sales, said that as a result, even though it is taking longer between nodes, the additional scaling has resulted in the same year-on-year improvements that the former two-year cadence provided over time.


Ruth Brain, an Intel Fellow and director of interconnect technology and integration, talked about the company’s existing 14nm technology, which started manufacturing in 2014, and said that it was similar in density to the 10nm products others are starting to ship this year.


She explained how this process introduced “hyper scaling,” in part by using a more efficient multi-patterning technique to create finer features than the 80nm or so lines that the current 193nm immersion scanners can create in a single pass. Intel said that by using a technology called “self-aligned double patterning” (SADP), rather than the Litho-Etch-Litho-Etch method that other manufacturers use, it can get more accurate and consistent results leading to better yields and performance.


Overall, Brain said the use of hyper scaling delivers 1.4 times more units per dollar than traditional scaling would allow, roughly equivalent to the savings Intel would have gotten had the industry moved from 300mm to 450mm silicon wafers (a switch that was widely discussed, but seems to have been abandoned for now).


10nm Hyperscaling


Kaizad Mistry, a corporate vice president and co-director of logic technology development, explained how hyper scaling techniques are being used at 10nm, and gave more details on the company’s 10nm process, which he described as “a full generation ahead” of other 10nm technologies. Overall he said that the 10nm node will deliver either a 25 percent improvement in performance at the same power or an almost 50 percent reduction in power at the same performance compared to the 14nm node.


Mistry described Intel’s process as using a gate pitch of 54nm and a cell height of 272nm, as well as a fin pitch of 34nm and a minimum metal pitch of 36nm. Essentially, he said this means you have fins that are 25 percent taller and 25 percent more closely spaced than at 14nm. In part, he said, this has been accomplished by using “self-aligned quad patterning,” taking a process Intel developed for 14nm multi-patterning and extending it even further, in turn enabling smaller features. (But I would note this seems to indicate that gate pitch isn’t scaling as fast as in previous generations.)


Two new hyper scaling advances have helped as well, he said. The first of these is “contact over active gate,” which means the contact to a transistor’s gate is now placed directly on top of the active gate rather than alongside it. He said this gave another 10 percent area scaling on top of pitch scaling. The second technique, which Mistry said had been used before but not with FinFET transistors, is called “single dummy gate.” In the 14nm generation, he said, Intel’s transistors have had full “dummy gates” at the edge of each logic cell; at 10nm, however, there is only half a dummy gate at each edge. This provides another 20 percent effective area scaling benefit, he said.


Together, Mistry said, these techniques allow for a 2.7x improvement in transistor density and enable the company to produce more than 100 million transistors per square millimeter.
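Those two figures line up with the density chart Bohr showed earlier; a quick check of the arithmetic:

    # Consistency check against Bohr's density chart (my arithmetic).
    density_14nm = 37.5                    # million transistors per mm^2 at 14nm
    density_10nm = density_14nm * 2.7      # applying Mistry's 2.7x density claim
    print(f"{density_10nm:.1f} million transistors/mm^2")   # ~101, i.e. "over 100 million"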


Mistry also made it clear that, as with 14nm, the expanding length of time between process nodes has made it possible for the company to enhance each node a bit each year. He described in general terms plans for two additional versions of the 10nm process with improved performance. (I did find it interesting—and a little worrisome—that, although the charts show the 10nm versions clearly requiring less power than the 14nm ones, they suggest that the first 10nm chips will not offer as much performance as the latest 14nm ones.)


10nm++ Technology Enhancements


He said the 10nm++ process will deliver an additional 15 percent better performance at the same power, or a 30 percent power reduction at the same performance, compared to the original 10nm process.


Later, Murthy Renduchintala, president of the client and IoT businesses and systems architecture group, was more explicit, and said the core products are aiming for a better than 15 percent performance improvement every year on an “annual product cadence.”
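If that cadence holds, the gains compound quickly; a rough illustration assuming a flat 15 percent per year (my arithmetic, not an Intel roadmap figure):

    # Compounding a 15 percent annual performance gain (illustrative only).
    for years in range(1, 4):
        print(years, round(1.15 ** years, 2))   # 1.15, 1.32, 1.52 -- about 1.5x over three years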


22 FFL


Bohr returned to describe a new process called 22 FFL, meaning 22nm processing using low-leakage FinFETs. He said this process allows up to a 100x reduction in power leakage compared to conventional planar technology, and would have higher density than any other 22nm process, along with the possibility of higher-performance FinFETs. What’s interesting here is that a design can mix two different kinds of transistors within a single chip: high-performance transistors for things like application processing, and low-leakage transistors for always-on, always-connected circuits.


This may be designed to compete with other 22nm-class processes, such as GlobalFoundries’ 22FDX silicon-on-insulator process. The idea seems to be that by staying at 22nm, you can avoid the double patterning and additional expense that tighter nodes require, but still achieve good performance.


Mix and Match


Renduchintala talked about how, as an integrated device manufacturer (IDM)—a company that both designs processors and manufactures them—Intel has the advantage of a “fusion between process technology and product development.” The company is able to choose from multiple types of IP and process techniques, including picking the transistors that suit each part of its design, he said.


What I found most interesting was his discussion of how processor design was moving from a traditional monolithic core to a “mix and match” design. The idea of heterogeneous cores is nothing new, but the idea of being able to have different parts of a processor built on dies using different processes all connected together could be a big change.


Enabling this is the embedded multi-die interconnect bridge (EMIB), which Intel started shipping with its recent Stratix 10 FPGAs and discussed using in future Xeon server products at its recent investor day.


Renduchintala described a future in which a processor might have CPU and GPU cores produced on the latest and densest processes, I/O and communications components that don’t benefit as much from increased density on an earlier process, and other functions on even older nodes. All of these dies would be connected using the EMIB bridge, which allows faster connections than traditional multi-chip packages but costs less than a silicon interposer.


If all of this comes to pass, the entire framework for new processors could change. Instead of getting a new processor made entirely on a new process every couple of years, we may be heading toward a world in which process technology changes more gradually, and in only parts of the chip. This also opens up the possibility of adding many more things to the chip itself, from more I/O components to different kinds of memory. In the long run, this could signal big changes in how chips—and the systems they power—work.




http://www.pcmag.com/article/352738/intels-10nm-process-its-more-than-just-chip-scaling


Is Gigabit LTE in your future?

While 5G was everywhere at Mobile World Congress, we still don’t have a standard, and it will take even longer before we have phones that support it. On the other hand, phones that support a gigabit connection over the existing LTE standard were unveiled at the show, promising faster connections.


One thing to understand about Gigabit LTE is that it requires two things: a phone with a modem that supports advanced carrier aggregation, or CA (the ability to use multiple blocks of spectrum at the same time), and a network that supports these connections, meaning it has the spectrum available. Of course, your data plan will likely limit how much high-speed data you can use. But the idea isn’t really to download content at a gigabit per second for an extended period of time; rather, it’s to get the content you need very quickly and then get off the network. This reduces the load on the network and boosts capacity, and it lets your modem stop transmitting sooner, which saves power.


Qualcomm introduced the first “Gigabit LTE” modem, the Snapdragon X16 chipset, in the run-up to last year’s show; this technology is now part of the Snapdragon 835 application processor.


This technology includes support for 256-QAM modulation, meaning it can pack more bits into each transmitted symbol; support for 4x4 MIMO, so it can receive data over four antennas; and support for up to four 20MHz blocks of spectrum using carrier aggregation (4x20MHz CA). The modem supports both licensed and unlicensed spectrum using LTE-U (LTE Unlicensed), which is backed by a variety of operators in the U.S., Korea, India, and other markets.


Technically, it also supports LTE-Advanced, specifically LTE Category 16 for downloads, with a theoretical peak of 1 gigabit per second, and Category 13 for uploads, with a theoretical peak of 150 megabits per second. (Note that the current Snapdragon 820/821 used in many of today’s top phones uses the company’s X12 modem, which has a theoretical capability of downloads at 600Mbps. In the real world, congestion and spectrum get in the way.)
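For a sense of where the gigabit figure comes from, here is a simplified peak-rate calculation for a single 20MHz LTE carrier. This is my own back-of-envelope arithmetic, not Qualcomm's: it ignores control-channel and reference-signal overhead, coding rate, and the transport-block limits that actually cap Category 16 at about 1Gbps.

    # Raw air-interface rate for one 20MHz LTE carrier (simplified; overhead ignored).
    resource_blocks = 100        # per 20MHz carrier
    subcarriers = 12             # per resource block
    symbols_per_ms = 14          # OFDM symbols in a 1ms subframe
    bits_per_symbol = 8          # 256-QAM carries 8 bits, vs. 6 bits for 64-QAM
    layers = 4                   # 4x4 MIMO spatial layers

    bits_per_ms = resource_blocks * subcarriers * symbols_per_ms * bits_per_symbol * layers
    print(bits_per_ms / 1e3, "Mbps raw per carrier")   # ~537.6 Mbps before overhead

    # Aggregating multiple carriers multiplies this; after overhead and the Category 16
    # transport-block caps, the practical peak works out to roughly 1Gbps.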


Qualcomm Gigabit LTE test


A couple of companies demonstrated phones using the Qualcomm Snapdragon 835 at the show, including Sony, which showed its Xperia XZ Premium, due out in June. Qualcomm demonstrated Gigabit LTE on the show floor, with downloads of just less than 1Gbps.




ZTE also ran its own speed tests on the show floor. Still, the first phone widely expected to ship with this processor is the upcoming Samsung Galaxy S8.


Meanwhile, the big networking providers to the telecom industry are also pushing this technology, and both Nokia and Ericsson talked it up at the show.


Last week, Sprint said it was debuting a network that supports Gigabit LTE in New Orleans, using three-channel carrier aggregation and 60MHz of Sprint’s 2.5GHz spectrum; the carrier demonstrated this with an unannounced phone from Motorola Mobility that uses a Snapdragon 835. T-Mobile has also said it plans to roll out a gigabit-capable network in the U.S. later this year, and AT&T has said it expects that some of its sites will also be able to reach that speed sometime this year.


Qualcomm is no longer alone in the space. The Samsung Exynos 8895, which is now being marketed as the Exynos 9, also includes its own gigabit modem, supporting Category 16 downloads with a theoretical maximum of 1Gbps using five-carrier aggregation (5CA) and Category 13 uploads of up to 150Mbps using 2CA. Again, this chip is manufactured on Samsung’s 10nm process and is expected to ship in the international versions of the Galaxy S8.


Intel Gigabit LTE modem


In addition, in the run-up to this year’s show, Intel announced its XMM 7560 modem, which supports Category 16 for downloads with a peak theoretical speed of 1Gbps, and Category 13 for uploads with speeds up to 225Mbps. The first modem to be built on Intel’s 14nm technology, it enables 5x20MHz CA for downloads, 3x20MHz CA for uploads, 4x4 MIMO, and 256-QAM, and works on both licensed and unlicensed spectrum, where it coexists with Wi-Fi using a technology called License Assisted Access (LAA), which is primarily used by carriers in Europe and Japan. Intel expects samples in the first half of this year and a move into production soon afterward.


Not to be outdone, Qualcomm showed off its X20 modem, which is even faster, with a theoretical peak speed of 1.2Gbps, support for LTE Category 18, and 5x20MHz CA across licensed and unlicensed spectrum using both LAA and LTE-U. The X20 also supports the 3.5GHz Citizens Broadband Radio Service in the U.S. This modem is built on a 10nm process, and Qualcomm says it has begun sampling to customers, with the first commercial devices expected in the first half of 2018. It seems highly likely that this modem will find its way into an applications processor around that time as well.


You may not need all this speed right now, but it seems necessary to support higher resolution video and VR applications. In the meantime, if it helps deliver content more quickly and can improve the networks, that’s a win for everyone.




http://www.pcmag.com/article/352395/is-gigabit-lte-in-your-future


Barcelona Plans The World’s Most Diverse Supercomputer

These days, there are a number of different approaches to high-performance computing systems, usually referred to as supercomputers. Most of these systems use a massive number of Xeon processors, but some of the most interesting new machines add accelerators, such as Nvidia’s Tesla GPUs or Intel’s Xeon Phi. There’s even some talk that massive ARM-based systems could be effective in the future. But what if you could try all of these architectures in one location?


That’s the challenge and promise of the new MareNostrum 4 computer, which is being readied for installation at the Barcelona Supercomputing Center (BSC). The design includes a main system for general-purpose use based on traditional Xeons, plus three emerging-technology clusters based on IBM Power with Nvidia GPUs, Intel Xeon Phi, and ARM processors. While I was in Barcelona for Mobile World Congress, I had a chance to talk to Sergi Girona, Operations Director for the BSC, who explained the reasoning behind the four clusters.


Girona said the center’s main mission is to provide supercomputing services for Spanish and other European researchers, in addition to industry. As part of this mission the center wants to have at least three “emerging tech clusters,” so it can test different alternatives.


For the general computing cluster, Girona says the center chose a traditional Xeon design because it makes it easier to migrate applications that run on the current MareNostrum 3, which is slated to be disconnected next week. The design also had to fit the existing space, inside a former chapel. (I visited the center and saw the current supercomputer a year ago.)



The new design, to be built by Lenovo, will be based on the new Xeon v5 (Skylake), with 3,456 nodes, each with two sockets and 24 cores per chip, for a total theoretical peak performance of 11.14 petaflops. Most cores will have 2GB of memory, but 6 percent will have 8GB, for a total of 331.7TB of RAM. Each node will have a 240GB SSD, though eventually some will have 3D XPoint memory, when that is available. The nodes are to be connected via Intel’s Omni-Path interconnect and 10Gb Ethernet. The system will also have six racks of storage from IBM, with 15 petabytes of capacity, including a mix of flash and hard disk drives. Overall, the design will take up 62 racks—48 for computing, 6 for storage, 6 for networking, and 2 for management. It will fill 120 square meters (making for a very dense environment) and draw 1.3 megawatts of power, up from the 1 megawatt drawn by the previous design. Operation is expected to begin on July 1.
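Those headline numbers hang together; here is a quick sanity check. The per-core clock and the 32 double-precision flops per cycle for AVX-512 are my assumptions, not published BSC figures.

    # Sanity check on the 11.14-petaflop peak (clock and flops/cycle are assumptions).
    nodes = 3456
    sockets_per_node = 2
    cores_per_socket = 24
    cores = nodes * sockets_per_node * cores_per_socket      # 165,888 cores in total

    flops_per_core = 2.1e9 * 32       # ~2.1GHz x 32 double-precision flops/cycle (AVX-512)
    peak_flops = cores * flops_per_core
    print(cores, f"{peak_flops / 1e15:.2f} petaflops")       # ~11.1 petaflops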


MareNostrum 1-2-3-4


One thing I found interesting here is how clearly the move to the new generation demonstrates the progression of technology. The previous generation had a peak performance of about 1 petaflop, and this system should be more than 10 times faster, while using only 30 percent more power. For comparison, the original MareNostrum supercomputer, installed in 2004, had a peak performance of 42 teraflops and used 640 kilowatts of power. (The details of the performance improvements over four generations of MareNostrum are in the chart above.) Girona says this means that what would have taken a year to run on MareNostrum 1 can be done in a single day on the new system. Pretty impressive.
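Using just the figures cited above, the performance-per-watt progression is striking (my arithmetic):

    # Performance per megawatt, from the figures quoted above (my arithmetic).
    mn1_teraflops, mn1_megawatts = 42, 0.64      # MareNostrum 1 (2004): 42 TF, 640 kW
    mn4_teraflops, mn4_megawatts = 11140, 1.3    # MareNostrum 4 (2017): 11.14 PF, 1.3 MW

    mn1_efficiency = mn1_teraflops / mn1_megawatts   # ~66 teraflops per megawatt
    mn4_efficiency = mn4_teraflops / mn4_megawatts   # ~8,570 teraflops per megawatt
    print(round(mn4_efficiency / mn1_efficiency))    # roughly a 130x gain in efficiency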


For emerging technology, the site will have three new clusters. One will consist of IBM Power9 processors and Nvidia GPUs, designed to have a peak processing capability of over 1.5 petaflops. This cluster will be built by IBM and uses the same type of design being deployed in the Summit and Sierra supercomputers, which the US Department of Energy has commissioned for Oak Ridge and Lawrence Livermore National Laboratories as part of its CORAL program (a collaboration of the Oak Ridge, Argonne, and Lawrence Livermore national labs).


The second cluster will be made up of Intel Xeon Phi processors, with Lenovo building a system that uses the forthcoming Knights Hill (KNH) version and Omni-Path, with a peak processing capability of over 0.5 petaflops. This also mirrors the American CORAL program, as it uses the same processors that will be inside the Aurora supercomputer, commissioned by the US Department of Energy for Argonne National Laboratory.


Finally, a third cluster will be formed of 64-bit ARMv8 processors in a prototype machine supplied by Fujitsu, based on the same chips the company is developing for the Japanese system intended to supplant the current K supercomputer. This, too, should offer more than 0.5 petaflops of peak performance. The exact timing for the start of operations on the emerging clusters has yet to be disclosed, Girona said.


Overall, the system will cost $34 million, under a contract won by IBM and funded by the Spanish government. One major reason for having all four types of computing on site is research, Girona said. The center, which employs 450 people in total, has 160 researchers focused on computer science, including architecture and tools. In particular, as a member of PRACE (Partnership for Advanced Computing in Europe), BSC aims to lead in performance optimization and parallel computing.


Girona said that BSC wants to influence the development of new technologies, and is planning on using the new machine to analyze what will happen in the future, in particular to make sure that software is ready for whatever architecture the next machine—likely to arrive in about 3 years—will have. BSC has long worked on tools for emerging architectures, he noted.


Another topic researchers are considering is whether or not it would be worth developing a European processor for IT, likely based on the ARM architecture.


Barcelona won’t have the fastest supercomputer in the world; that record is currently held by the Chinese, with the Americans and Japanese trying to catch up. But MareNostrum 4 will be the most diverse, and potentially the most interesting.

http://www.pcmag.com/article/352274/barcelona-plans-the-worlds-most-diverse-supercomputer


The 10nm Processors of MWC 2017

One of the things that stood out at this year’s Mobile World Congress was the presence of three new mobile application processors—from MediaTek, Qualcomm, and Samsung—that all use new 10nm FinFET manufacturing processes, which promise smaller transistors, faster peak performance, and better power management than the 14nm and 16nm processes used in today’s top-end phones. During the show, we got more details on these new processors, which should start appearing in phones over the next couple of months.


Qualcomm Snapdragon 835


Qualcomm had announced the Snapdragon 835 prior to CES, but at Mobile World Congress we were able to see the processor in a couple of phones, notably the Sony Xperia XZ Premium, due out in June, as well as an unspecified (but publicly demoed) ZTE “Gigabit phone.”


Qualcomm has said the 835 was the first 10nm product to enter production, manufactured on Samsung’s 10nm process. It is widely expected to be in the U.S. versions of the Samsung Galaxy S8, which will be unveiled on March 29.


The Snapdragon 835 uses Qualcomm’s Kryo 280 CPU cluster, with four performance cores running at up to 2.45GHz with 2 megabytes of level 2 cache, and four “efficiency” cores running at up to 1.9GHz. The company estimates that the chip will use the lower-power cores 80 percent of the time. While Qualcomm wouldn’t go into a lot of detail on the cores, it said that rather than being completely custom designs, they are enhancements of two different ARM cores. It would make sense that the larger cores are a variation on the ARM Cortex-A73 and the smaller ones on the A53, but in meetings at MWC, Qualcomm stopped short of confirming that.


In talking about the chip, Keith Kressin, SVP of Product Management at Qualcomm Technologies, stressed that power management was a major focus, as it allows for sustained performance. But he also stressed the other features of the chip, which uses an Adreno 540 GPU with the same basic architecture as the Adreno 530 in the 820/821 but a 30 percent improvement in performance. It also includes a Hexagon 628 DSP, with support for TensorFlow for machine learning, as well as an improved image signal processor.


New in the processor is the company’s Haven security module, which handles such things as multifactor authentication and biometrics. Kressin stressed that what’s important is how all of this works together, and noted that wherever possible the chip will use the DSP, then the graphics, then the CPU. The CPU is actually “the core we least want to use,” he said.


One of the most notable features is the integrated “X16” modem, capable of gigabit download speeds (by using carrier aggregation on three 20 MHz channels) and upload speeds of 150 megabits per second. Again, this should be the first modem to ship that’s capable of such speeds, albeit only in markets where the wireless providers have the right spectrum. It also supports Bluetooth 5 and improved Wi-Fi.


Kressin said the processor will enable 25 percent better battery life than the previous 820/821 chips (manufactured on Samsung’s 14nm process) and will include Quick Charge 4.0 for faster charging.


At the show, the company announced a VR development kit and gave more details about how the chip will better handle VR and augmented reality applications, with an emphasis on improved function in stand-alone VR systems.


It is likely that the Snapdragon 835 will appear in a lot of phones over the course of the year. Kressin said the company was “hitting target yields today” and that it will ramp throughout the year.


Samsung Exynos 8895


Samsung LSI hasn’t been as forthcoming about its chip, but used MWC to put a more public face on its product lines, which will now be branded Exynos 9 for processors aimed at the premium market, and Exynos 7, 5, and 3 for high-end, mid-tier, and low-end phones. The company also makes ISOCELL image sensors and a variety of other products.


Samsung LSI just announced the first Exynos 9 processor, technically the 8895, which will also be its first processor produced on the company’s 10nm FinFET process; Samsung says the process provides 27 percent better performance at 40 percent lower power than its 14nm node. The 8895 is widely expected to be in international versions of the Galaxy S8, though we’re unlikely to see it in the U.S., as its integrated modem doesn’t support the older CDMA networks used by Verizon and Sprint.


Like the Qualcomm chip, Samsung’s has eight cores in two groups. The four high-end cores use the company’s second-generation custom design, which Samsung says is ARMv8-compatible but has an “optimized microarchitecture for higher frequency and power efficiency,” though it wouldn’t discuss the differences in any more detail. For graphics, it uses an ARM Mali-G71 MP20, meaning it has 20 graphics cores, up from 12 in the 14nm 8890 used in some of the international Galaxy S7 models. This should allow for faster graphics, including 4K VR at up to a 75Hz refresh rate, as well as support for recording and playback of 4K video at 120fps.


The two CPU clusters and the GPU are connected using what the firm calls the Samsung Coherent Interconnect (SCI), which enables heterogeneous computing. And it also includes a separate vision processing unit, designed for face and scene detection, video tracking, and things like panoramic pictures.


The product also includes its own gigabit modem, supporting a theoretical maximum of 1Gbps downlink (Category 16, using five-carrier aggregation) and 150Mbps uplink (Category 13, using 2CA). It will support 28-megapixel cameras or a dual-camera setup with 28 and 16 megapixels.


Exynos 9 VR Demo


The firm said this allows for better stand-alone VR headsets, and it demonstrated a stand-alone headset with 700 pixels-per-inch resolution. I thought the display was notably sharper than on the commercial VR headsets I’ve seen to date, though I still noticed a bit of the screen-door effect; what stood out was how fast the reaction time seemed given the higher resolution.


MediaTek Helio X30


MediaTek Helio X30 Overview


MediaTek had announced its 10-core Helio X30 last fall, but at the show the company said its 10nm chip had entered mass production and should be in commercial phones in the second quarter of this year.


We got a lot more technical detail at last month’s International Solid-State Circuits Conference (ISSCC), but the highlights remain interesting, as this seems likely to be the first chip out using TSMC’s 10nm process.


MediaTek Helio X30 CPU Architecture


The key difference in this processor is its “tri-cluster” deca-core CPU architecture, featuring two 2.5GHz ARM Cortex-A73 cores for high performance, four 2.2GHz A53 cores for less demanding tasks, and four 1.9GHz A35 cores that run when the phone is doing only light work. These are connected by the firm’s own coherent system interconnect, called MCSI. A scheduler, known as CorePilot 4.0, manages the interactions among these cores, turning them on and off and managing thermals and user-experience metrics such as frames per second in order to deliver consistent performance.
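To make the tri-cluster idea concrete, here is a toy sketch of how a scheduler might steer work between the three clusters. The thresholds and the placement rule are invented for illustration; this is not how MediaTek's CorePilot actually works.

    # Toy tri-cluster placement: route a task to the smallest cluster that can handle it.
    # Thresholds are invented for illustration; this is not MediaTek's CorePilot logic.
    CLUSTERS = [
        ("4x Cortex-A35 @ 1.9GHz", 0.25),   # background and light-duty work
        ("4x Cortex-A53 @ 2.2GHz", 0.70),   # everyday, moderately demanding tasks
        ("2x Cortex-A73 @ 2.5GHz", 1.00),   # heavy, bursty workloads
    ]

    def pick_cluster(load: float) -> str:
        """Pick a cluster for a task whose demand is normalized to the 0..1 range."""
        for name, ceiling in CLUSTERS:
            if load <= ceiling:
                return name
        return CLUSTERS[-1][0]

    print(pick_cluster(0.1))    # e.g. background sync -> A35 cluster
    print(pick_cluster(0.9))    # e.g. gaming or a camera pipeline -> A73 cluster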


As a result, the company says the X30 delivers a 35 percent improvement in multi-threaded performance and a 50 percent reduction in power consumption compared with last year’s 16nm Helio X20. That’s notably better than what the company claimed at the chip’s introduction. In addition, graphics have been improved: the chip now uses a variant of the Imagination PowerVR Series7XT running at 800MHz, which MediaTek says works at the same level as the current iPhone, delivering 2.4 times the processing power while using 60 percent less power.


The chip has a Category 10 LTE modem, which supports LTE-Advanced, 3-carrier aggregation downloads (for a maximum theoretical download speed of 450Mbps), and 2-carrier aggregation uploads (for a maximum of 150 Mbps).


While MediaTek said its modems are certified in the U.S., you’re unlikely to see this chip in many phones in this market. That’s because it’s aimed at “sub-flagship” models and Chinese OEMs.


I asked Finbarr Moynihan, MediaTek’s General Manager of Corporate Sales, where application processors go from here, and he said he expects more focus on user experience: things such as smooth performance, fast charging, the camera, and video features.


ARM Looks Forward


At the show, ARM announced that it had acquired two companies, Mistbase and NextG-Com, for software and hardware intellectual property that supports the NB-IoT standard, which was part of 3GPP Release 13. The company said it would include these technologies in its Cordio-N family of solutions for the Internet of Things. What ARM hasn’t done is announce anything beyond the A73, A53, and A35, which, according to John Ronco, VP of Product Marketing for ARM’s CPU group, it sees as sustaining technologies going forward. The A73 was designed for 10nm to 28nm processes, and he suggested that a future core could target 16nm processes. While he admitted that each generation of process technology has its challenges, he pointed to work at GlobalFoundries, Samsung, and TSMC as proof that we haven’t reached the end of Moore’s Law. Looking forward, he echoed much of what the processor makers themselves say: that “efficiency is the key” to getting the kind of user interface customers want.




http://www.pcmag.com/article/352240/the-10nm-processors-of-mwc-2017
