Tag Archives: highend

Grab a pair of the high-end Sennheiser MOMENTUM True Wireless 3 earbuds for 38% OFF their price while you can – PhoneArena

  1. Grab a pair of the high-end Sennheiser MOMENTUM True Wireless 3 earbuds for 38% OFF their price while you can PhoneArena
  2. Amazon shoppers rush to buy ‘real value’ $72 gadget scanning for $19 and say they ‘can’t get enough’… The US Sun
  3. Amazon’s All-new Echo Buds with 20-hr. battery and Alexa now down at $40 shipped 9to5Toys
  4. Get the Beats Flex with Apple’s W1 chip for just under $40! PhoneArena
  5. Amazon shoppers rush to buy ‘fantastic’ $50 must-have gadget for $19.99 as customer praises ‘exceptional pe… The US Sun

Read original article here

US poised for slowdown in high-end munitions deliveries to Ukraine

Defense Secretary Lloyd Austin signaled this week that the U.S. and its Western allies are having trouble keeping pace with Ukraine’s demand for the advanced weaponry it needs to fend off Russia’s invasion. That signal reflects dwindling supplies for Ukraine and fear in the White House of escalation that could lead to war between the U.S. and Russia.

The risk of reduced U.S. stockpiles of high-end munitions has been reported almost since the U.S. began contributing to Ukraine’s defense. Now, nearly eight months since the start of the war, experts interviewed by Fox News Digital say the U.S. is at or very near the end of its capacity to give. 

They agreed that Austin’s remarks indicate that the initial rush of high-end munitions like HIMARS rocket launchers, Javelin anti-tank missiles, anti-aircraft Stingers and M-777 Howitzers is over. These sources pointed to two factors that may be contributing to this reality.

One factor is the issue that Austin addressed directly this week – the U.S. is running low on equipment that it can hand over to Ukraine.

RUSSIA SCRAMBLES TO REPAIR CRIMEA BRIDGE, ZELENSKYY VOWS TO ACCELERATE ‘VICTORY’

In this photo released by Ukrainian Presidential Press Office, Ukrainian President Volodymyr Zelenskyy leads a meeting of the National Security and Defense Council in Kyiv, Ukraine, Friday, Sept. 30, 2022.
(Ukrainian Presidential Press Office via AP)

At a press conference Wednesday, Austin was asked whether the U.S. and other nations are worried about running so low on domestic supplies of critical munitions that they can no longer help Ukraine. Austin dodged the question by stressing that the desire is there to get Ukraine what it needs, but he left unsaid whether Ukraine’s allies can actually deliver.

“Well, it certainly is not a question of lack of will,” Austin replied.

Austin had just concluded a meeting with officials from dozens of countries about Ukraine’s munitions needs. As he described that meeting, he again talked about willpower but hinted at strained capacity to provide more for Ukraine, which is using up munitions faster than the world can deliver them.

RUSSIA’S WAGNER GROUP MAKES ‘SOME’ ADVANCES IN DONBAS IN FIRST TACTICAL GAINS SINCE JULY: UK INTEL

“We will produce and deliver these highly effective capabilities over the course of the coming months — and in some cases years — even as we continue to meet Ukraine’s most pressing self-defense requirements in real time,” Austin said of the most recent commitment to send HIMARS, vehicles, radar systems and other equipment.

Secretary of Defense Lloyd Austin testifies before a House subcommittee in Washington, D.C., on May 11, 2022.
(AP/Jose Luis Magana)

Mark Cancian is a senior adviser at the Center for Strategic & International Studies who spent seven years working on DOD procurement issues for the Office of Management and Budget. His assessment, based on inventory levels, industrial capacity, and information from the Biden administration, is that the U.S. has “limited” supplies of HIMARS, Javelins, Stingers and M-777 Howitzers.

“There are some areas where we’re basically at the bottom of the barrel,” he told Fox News Digital.

In some cases, this means the U.S. will likely start meeting Ukraine’s request for weaponry by sending over lower-end substitutions, such as lighter Howitzers that are serviceable but not what Ukraine is after. In other cases, the U.S. may not have much to give – Cancian said that while there is talk of the U.S. providing more air defense equipment, there is not much the U.S. can give in that area.

Cancian said he reads Austin’s comments as a sign that the days of the U.S. giving Ukraine its best stuff are gone.

NATO HEAD WARNS RUSSIA TO AVOID ‘VERY IMPORTANT LINE’ AHEAD OF NUCLEAR TESTS

“It confirmed what I believe, that we will continue to support Ukraine, but we’re going to have to do it in different ways, like providing substitutes, or we might have to buy stuff from other people, or it will take longer,” he said. “That it won’t be quite the same.”

He said this runs the risk of creating what he called a “petting zoo of NATO equipment” in Ukraine – relatively small numbers of many types of equipment that could create compatibility issues.

Some on Capitol Hill are reading Austin’s remark differently, in a way that leads to largely the same result – that the Biden administration is purposely slowing down the transfer of critical munitions to Ukraine because it is increasingly worried about stumbling into a direct conflict with Russia.

Photo of Ukraine President Volodymyr Zelenskyy and President Biden 
(AP/Office of the President of Ukraine)

A congressional aide with working knowledge of these issues told Fox News Digital that while officials are hinting at limited supplies, there is still room to give more, and that the slowdown is because of a different calculation the Biden administration is making.

“They are afraid of escalation,” this aide said.

Just last week, President Biden openly talked about the “Armageddon” scenario that could unfold if Russia tried to win the war with a tactical nuclear strike. The congressional aide interpreted Austin’s remarks as a sign the administration is more and more worried about crossing a line that might force that outcome.

Another sign of U.S. caution, the aide said, is that the administration allowed nearly $2.8 billion in authority to supply Ukraine with weapons to expire a few weeks ago, at the end of fiscal year 2022. Some on Capitol Hill are reading that as an indication that the administration is finding its own comfort level when it comes to arming Ukraine, and that level stops short of what Congress authorized.

“Congress gave the administration more than it wanted,” the aide said. The Defense Department declined to respond to questions from Fox News Digital about the expiration of this authority.

RUSSIA USING IRANIAN-MADE ‘KAMIKAZE DRONES’ TO STRIKE AROUND KYIV

Destroyed Russian armored vehicles left behind by the Russian forces in Izium, Kharkiv, Ukraine on Oct. 2, 2022.
(Photo by Metin Aktas/Anadolu Agency via Getty Images)

There is a related view within Congress that while U.S. stocks of certain munitions have clearly been reduced as the U.S. sends items to Ukraine, that reduction is not a security threat to the United States itself. The aide explained that many of these items were stockpiled largely for use in a possible conflict with Russia, and that conflict is already playing out with Ukraine in the lead.

That conflict is reducing Russia’s military capacity, which means a corresponding drop in U.S. inventories is not putting the U.S. anywhere near a stockpile crisis.

To put it another way: the Biden administration has the flexibility to give Ukraine more but is choosing not to.

CLICK HERE TO GET THE FOX NEWS APP

The evolving U.S. posture comes just as Ukrainian President Volodymyr Zelenskyy is intensifying pressure on Western nations to provide more weapons. Just this week, Zelenskyy asked for air defense systems that can blunt Russia’s recent missile attacks on Ukraine’s capital.

“The 229th day of full-scale war,” he said. “On the 229th day, they are trying to destroy us and wipe us off the face of the earth.”


AMD Zen 4 Ryzen 9 7950X and Ryzen 5 7600X Review: Retaking The High-End

Back at CES 2022, held in Las Vegas at the beginning of the year, AMD announced that its new Zen 4 core would be coming in the second half of 2022. During AMD’s ‘together we advance_PCs’ live-streamed event at the end of August, AMD unveiled its Ryzen 7000 series of desktop processors, with four SKUs aimed at different product segments. Today AMD has officially launched Ryzen 7000, with the Ryzen 9 7950X serving as the brand’s claim to performance leadership in x86 desktop processors.

On paper, the AMD Ryzen 9 7950X is a 16C/32T behemoth built to take overall performance leadership in desktop computing. The entry point into the market is the Ryzen 5 7600X, which has 6C/12T and harnesses the benefits of the flagship in a more svelte and affordable chiplet-based package. With Zen 4 and its new architecture built on TSMC’s 5 nm process, AMD pins its hopes on bringing that all-important performance crown back to its side. We’ve detailed what Zen 4 brings to the table in terms of the new microarchitecture, and we’ve run the new Ryzen 9 7950X and Ryzen 5 7600X through our CPU test suite.

New Zen 4 Core on TSMC 5nm, Boost Up to 5.7 GHz!!

The latest Ryzen 7000 series of processors are direct replacements for the Ryzen 5000 series, with a new platform as well as a newly designed microarchitecture on both the front and back end of the silicon’s design.

As it stands at the time of writing, AMD is launching four processors based on its 5nm Zen 4 core, ranging from a 6C/12T part all the way up to 16C/32T, just like with the previous Ryzen 5000 (Zen 3) and Ryzen 3000 (Zen 2) launches.

The Ryzen 9 7950X: 16 Cores, 32 Threads, New 170 W TDP: $699

Looking at the specifications of the four AMD Ryzen 7000 processors, the top SKU is the Ryzen 9 7950X, with sixteen Zen 4 cores (two threads per core, 32T) spread across two eight-core 5nm CCDs. The Ryzen 9 7950X has a base frequency of 4.5 GHz and a turbo frequency on one core of 5.7 GHz, which, as it stands, is the fastest-boosting desktop CPU core in the world today.

AMD has also given the Ryzen 9 7950X a larger 170 W TDP, an increase of 65 W over its Ryzen 5000 counterpart, the 5950X (170 W versus 105 W). This increase in overall power has allowed AMD to improve its frequencies, as well as giving its Precision Boost Overdrive overclocking technology more room to breathe; more power typically means more performance.

The Ryzen 9 7900X, Ryzen 7 7700X, and Ryzen 5 7600X

Moving one step down the stack is the Ryzen 9 7900X, a 12C/24T, 170 W TDP part; it has a higher base frequency than the 7950X at 4.7 GHz, but a slightly lower boost frequency of up to 5.6 GHz. AMD has launched one Ryzen 7 part designed for mid-range desktop computing, the Ryzen 7 7700X, an 8C/16T SKU with a boost frequency on a single core of up to 5.4 GHz and a base frequency of 4.5 GHz.

Focusing on the entry-level segment, the Ryzen 5 7600X looks to capitalize on offering 6C/12T at the previous series’ maximum TDP of 105 W, at a reasonable price point. The Ryzen 5 7600X has a base frequency of 4.7 GHz, with a modest (compared to Ryzen 9) boost frequency on a single core of 5.3 GHz.

AMD Ryzen 7000 versus Ryzen 5000

| AnandTech | Cores / Threads | Base Freq | Turbo Freq | Memory Support | L3 Cache | TDP | MSRP |
|---|---|---|---|---|---|---|---|
| Ryzen 9 7950X | 16C / 32T | 4.5 GHz | 5.7 GHz | DDR5-5200 | 64 MB | 170 W | $699 |
| Ryzen 9 5950X | 16C / 32T | 3.4 GHz | 4.9 GHz | DDR4-3200 | 64 MB | 105 W | $799 |
| Ryzen 9 7900X | 12C / 24T | 4.7 GHz | 5.6 GHz | DDR5-5200 | 64 MB | 170 W | $549 |
| Ryzen 9 5900X | 12C / 24T | 3.7 GHz | 4.8 GHz | DDR4-3200 | 64 MB | 105 W | $549 |
| Ryzen 7 7700X | 8C / 16T | 4.5 GHz | 5.4 GHz | DDR5-5200 | 32 MB | 105 W | $399 |
| Ryzen 7 5800X | 8C / 16T | 3.8 GHz | 4.7 GHz | DDR4-3200 | 32 MB | 105 W | $449 |
| Ryzen 5 7600X | 6C / 12T | 4.7 GHz | 5.3 GHz | DDR5-5200 | 32 MB | 105 W | $299 |
| Ryzen 5 5600X | 6C / 12T | 3.7 GHz | 4.6 GHz | DDR4-3200 | 32 MB | 65 W | $299 |

Comparing apples to apples, so to speak, with like-for-like products from the new Zen 4 generation against the previous Zen 3 generation, Ryzen 7000 makes some big overall improvements to the chips’ capabilities. Starting at the top tier, the Ryzen 9 7950X delivers an enormous improvement in base and boost frequencies, a reflection of Zen 4 being more efficient than any previous Ryzen generation.

This has been possible in part through superior power efficiency: the Zen 4 architecture is largely a Zen 3 refinement, but it is produced on TSMC’s 5 nm process node (down from TSMC 7 nm). This efficiency has allowed AMD to boost clockspeeds without breaking the power bank, with the 105 W TDP 7700X seeing a 700 MHz improvement for no change in TDP. Coupled with a roughly 13% IPC improvement, the Ryzen 7000 series chips can deliver some significant single-threaded performance gains. And multi-threaded performance is not left out in the cold, either; by increasing the top TDP to 170 W, AMD is able to keep the CPU cores on its 12C and 16C parts at higher sustained turbo clocks, delivering much better performance there as well.
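As a rough sanity check, the clockspeed and IPC gains can be combined with simple arithmetic. This is a sketch, not AnandTech’s methodology: it assumes single-threaded performance scales multiplicatively with boost clock and with AMD’s quoted ~13% IPC uplift for Zen 4, which real workloads only approximate.

```python
# Rough single-threaded uplift estimate for the 7700X over the 5800X.
# Assumption: performance ~ boost clock x IPC (a simplification).

old_boost_ghz = 4.7   # Ryzen 7 5800X single-core boost
new_boost_ghz = 5.4   # Ryzen 7 7700X single-core boost (+700 MHz)
ipc_gain = 0.13       # AMD's quoted ~13% IPC improvement for Zen 4

freq_gain = new_boost_ghz / old_boost_ghz - 1
combined = (1 + freq_gain) * (1 + ipc_gain) - 1

print(f"frequency gain: {freq_gain:.1%}")        # ~14.9%
print(f"estimated ST uplift: {combined:.1%}")    # ~29.8%
```

Stacking a ~15% clock gain on a ~13% IPC gain is how a same-TDP part can land near a ~30% single-threaded improvement on paper.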

Of course, one of the key arguments here is that more power equals more performance, and that holds true for the Ryzen 7000 series. Ryzen 7000’s TJMax for its Precision Boost Overdrive technology stands at 95°C, which means the CPU will use all of the available thermal headroom to maximize performance.

This limit can be overridden when manually overclocking, which opens up the maximum TJMax to 115°C. It’s key to note that users will need more premium and aggressive cooling to squeeze every last drop of performance from Zen 4. Ryzen 7000 runs hot by design, and AMD has accounted for this in its choices: it has opted not to bundle its own CPU coolers with the retail packages, instead directing buyers to fairly powerful third-party coolers.

New AM5 Socket: AM4 Coolers will Support AM5 Too

AMD has also transitioned to a new platform for Ryzen 7000, named AM5. With AM5 comes a new socket, LGA1718. What’s interesting is that AMD has specified that most AM4 coolers will support the new LGA1718 socket on AM5, which is great for keeping compatibility with the previous generation.

This also means that AM4 is now a thing of the past, although it still offers some incredible value right now, along with support for cheaper DDR4 memory. AMD has of course switched to DDR5 memory for AM5, with JEDEC settings across all four CPUs set at DDR5-5200 – an improvement over the DDR5-4800 supported by Intel’s 12th Gen Core series.

AMD has unveiled four new chipsets: two Extreme variants, named X670E and B650E, and two regular chipsets, aptly named X670 and B650. The top-tier X670E series will feature PCIe 5.0 lanes to the top PEG slot, with support for the PCIe 5.0 storage devices expected in November 2022. On the regular X670 chipset, PCIe 5.0 to the PEG slot is optional rather than mandatory, as it is on X670E.

The B650 chipsets are designed to be more affordable and, as such, only feature PCIe 4.0 lanes to the PEG slot. They do, however, feature at least one PCIe 5.0 x4 storage slot. The B650E is reserved for lower-end boards that want to include PCIe 5.0 to the graphics card, although users looking to fully utilize PCIe 5.0 should opt for X670E: better boards, better controllers, and better specifications.

New I/O Die: TSMC 6nm For Ryzen 7000

As we’ve seen previously with the Ryzen 5000 series, AMD uses chiplet packaging, with two core complex dies (CCDs) on its top SKU and an I/O die hosting all of the PCIe 5.0 lanes, the integrated memory controller (IMC), and – new for Ryzen 7000 – two CUs of AMD’s RDNA 2 integrated graphics. The key advantages of AMD’s new 6 nm TSMC I/O die are more transistors, better efficiency at the manufacturing stage, and, most importantly from an efficiency point of view, lower overall power draw.

It’s time to dive deep into all of AMD’s new improvements and changes for its Zen 4 microarchitecture. Over the following pages, we’ll be going over the following:

  1. Ryzen 7000 Overview: Comparing Ryzen 7000 to Ryzen 5000 specifications
  2. Socket AM5: The New Platform For Consumer AMD
  3. More I/O For AM5: PCIe 5, Additional PCIe Lanes, & More Displays
  4. AM5 Chipsets: X670 and B650, Built by ASMedia
  5. DDR5 & AMD EXPO Memory: Memory Overclocking, AMD’s Way
  6. Ryzen 7000 I/O Die: TSMC & Integrated Graphics at Last
  7. Zen 4 Architecture: Power Efficiency, Performance, & New Instructions
  8. Zen 4 Execution Pipeline: Familiar Pipes With More Caching
  9. Test Bed and Setup
  10. Core-to-Core Latency
  11. SPEC2017 Single-Threaded Results
  12. SPEC2017 Multi-Threaded Results
  13. CPU Benchmark Performance: Power, Web, & Science
  14. CPU Benchmark Performance: Simulation and Encoding
  15. CPU Benchmark Performance: Rendering
  16. CPU Benchmark Performance: Legacy Tests
  17. Gaming Performance: 720p and Lower
  18. Gaming Performance: 1080p
  19. Gaming Performance: 4K
  20. Conclusion


NVIDIA’s Next-Gen GeForce RTX 40 Founders Edition Cooler For High-End GPUs Allegedly Leaks Out

The next-generation NVIDIA GeForce RTX 40 series Founders Edition design has allegedly been leaked by QbitLeaks. The new design shows us a glimpse of the updated Founders Edition GPU cooler that will be featured on the high-end Ada Lovelace GPUs such as the RTX 4090 and 4080.

NVIDIA’s Massive GeForce RTX 40 Founders Edition Cooler Allegedly Leaks Out, Coming To An RTX 4090 & RTX 4080 Graphics Card Soon!

According to the leaker, the new cooler design pictured here will be part of the GTC ’22 keynote to be held in two weeks. There are no details as to which card this is, but the updated shroud and fan design do match the RTX 4080 cooler that leaked out a few days ago. It looks like NVIDIA will be reusing its existing cooler design and giving it a slight visual overhaul.

We have seen in previous leaks that the heatsink under the shroud has been updated with a larger thermal contact surface area that covers the GPU, VRAM, & VRMs. The fans have been updated to a 7-blade fan design and also got slightly larger but the overall looks of the shroud & design remain mostly the same as the current Founders Edition graphics cards. This updated design should help deliver much better cooling for the Ada Lovelace GPUs which will be consuming lots of power. The cards will also utilize a PCIe Gen 5.0 connector interface and a Gen 5.0 power interface through the new 16-pin connectors.

NVIDIA GeForce RTX 4090 Founders Edition Graphics Card Cooler / Shroud Leak:

NVIDIA GeForce RTX 4080 Founders Edition Graphics Card Cooler / Shroud Leak:

The leaker states that this picture is a teaser for GTC, so we will likely see CEO Jensen Huang talk about his next-gen gaming lineup during the event. GTC has previously been a data center and HPC-specific event, but with the recent decline in NVIDIA’s earnings, mostly due to a falling gaming market, the company has repurposed its prime event and dedicated it to gamers.

NVIDIA GeForce RTX 4080 ‘Expected’ Specifications

The NVIDIA GeForce RTX 4080 is expected to utilize a cut-down AD103-300 GPU configuration with 9,728 cores, or 76 of the total 84 SMs enabled, whereas the earlier rumored configuration offered 80 SMs or 10,240 cores. While the full GPU comes packed with 64 MB of L2 cache and up to 224 ROPs, the RTX 4080 might end up with 48 MB of L2 cache and a lower ROP count due to its cut-down design. The card is expected to be based on the PG136/139-SKU360 PCB.

As for memory specs, the GeForce RTX 4080 is expected to rock 16 GB of GDDR6X memory, said to be clocked at 23 Gbps across a 256-bit bus interface. This will provide up to 736 GB/s of bandwidth. That is still a tad slower than the 760 GB/s offered by the RTX 3080, which pairs a wider 320-bit interface with a lowly 10 GB capacity. To compensate for the narrower bus, NVIDIA could be integrating a next-gen memory compression suite to make up for the 256-bit interface.
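The bandwidth figures quoted throughout follow directly from per-pin data rate and bus width. A quick sketch (the helper `gddr_bandwidth_gbps` is a hypothetical name, not part of any tool mentioned here):

```python
def gddr_bandwidth_gbps(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbps)
    times bus width (bits), divided by 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

print(gddr_bandwidth_gbps(23, 256))  # rumored RTX 4080: 736.0 GB/s
print(gddr_bandwidth_gbps(19, 320))  # RTX 3080: 760.0 GB/s
```

The same formula reproduces every bandwidth cell in the spec tables below, which is a handy way to cross-check leaked numbers.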

For power, the TBP is now said to be rated at 340 W, a 20 W increase over the previous 320 W figure. This brings the TBP into the same ballpark as the existing RTX 3080 graphics card (up to 350 W). It is not yet known whether the other RTX 40 series graphics cards will also get the faster GDDR6X memory treatment, but we know that Micron has commenced full mass production of up to 24 Gbps GDDR6X memory modules, so they have to go somewhere.

  • NVIDIA GeForce RTX 4080 “Expected” TBP – 340W
  • NVIDIA GeForce RTX 3080 “Official” TBP – 320W

NVIDIA GeForce RTX 4080 Series Preliminary Specs:

| Graphics Card Name | NVIDIA GeForce RTX 4080 Ti | NVIDIA GeForce RTX 4080 | NVIDIA GeForce RTX 3090 Ti | NVIDIA GeForce RTX 3080 |
|---|---|---|---|---|
| GPU Name | Ada Lovelace AD102-250? | Ada Lovelace AD103-300? | Ampere GA102-225 | Ampere GA102-200 |
| Process Node | TSMC 4N | TSMC 4N | Samsung 8nm | Samsung 8nm |
| Die Size | ~450mm² | ~450mm² | 628.4mm² | 628.4mm² |
| Transistors | TBD | TBD | 28 Billion | 28 Billion |
| CUDA Cores | 14848 | 9728? | 10240 | 8704 |
| TMUs / ROPs | TBD / 232? | TBD / 214? | 320 / 112 | 272 / 96 |
| Tensor / RT Cores | TBD / TBD | TBD / TBD | 320 / 80 | 272 / 68 |
| Base Clock | TBD | TBD | 1365 MHz | 1440 MHz |
| Boost Clock | ~2600 MHz | ~2500 MHz | 1665 MHz | 1710 MHz |
| FP32 Compute | ~55 TFLOPs | ~50 TFLOPs | 34 TFLOPs | 30 TFLOPs |
| RT TFLOPs | TBD | TBD | 67 TFLOPs | 58 TFLOPs |
| Tensor TOPs | TBD | TBD | 273 TOPs | 238 TOPs |
| Memory Capacity | 20 GB GDDR6X | 16 GB GDDR6X? / 12 GB GDDR6X? | 12 GB GDDR6X | 10 GB GDDR6X |
| Memory Bus | 320-bit | 256-bit? / 192-bit? | 384-bit | 320-bit |
| Memory Speed | 21.0 Gbps? | 23.0 Gbps? | 19 Gbps | 19 Gbps |
| Bandwidth | 840 GB/s | 736 GB/s? / 552 GB/s? | 912 GB/s | 760 GB/s |
| TBP | 450W | 340W | 350W | 320W |
| Price (MSRP / FE) | $1199 US? | $699 US? | $1199 US | $699 US |
| Launch (Availability) | 2023? | July 2022? | 3rd June 2021 | 17th September 2020 |

NVIDIA GeForce RTX 4090 ‘Expected’ Specifications

The NVIDIA GeForce RTX 4090 will use 128 of the GPU’s 144 SMs for a total of 16,384 CUDA cores. The GPU will come packed with 96 MB of L2 cache and a total of 384 ROPs, which is simply insane. The clock speeds are not confirmed yet, but considering that the TSMC 4N process is being used, we are expecting clocks in the 2.0-3.0 GHz range.
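The SM-to-core counts in these leaks are consistent with Ada Lovelace carrying 128 FP32 CUDA cores per SM (the same ratio as Ampere GA10x parts). A small sketch to cross-check the rumored figures (`cuda_cores` is a hypothetical helper name):

```python
CUDA_CORES_PER_SM = 128  # FP32 cores per SM on Ada Lovelace / Ampere GA10x

def cuda_cores(sm_count: int) -> int:
    """Total CUDA cores for a given number of enabled SMs."""
    return sm_count * CUDA_CORES_PER_SM

print(cuda_cores(128))  # rumored RTX 4090: 16384
print(cuda_cores(144))  # full AD102: 18432
print(cuda_cores(76))   # rumored RTX 4080: 9728
```

Each figure matches the counts quoted in the article, so the leaks at least agree with the architecture’s per-SM core ratio.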

As for memory specs, the GeForce RTX 4090 is expected to rock 24 GB of GDDR6X memory clocked at 21 Gbps across a 384-bit bus interface. This will provide up to 1 TB/s of bandwidth – the same as the existing RTX 3090 Ti graphics card. As far as power consumption is concerned, the TBP is said to be rated at 450 W, which means the TGP may end up lower than that. The card will be powered by a single 16-pin connector delivering up to 600 W of power, so it is likely we will see 500 W+ custom designs, as we saw with the RTX 3090 Ti.

As for the feature set, the NVIDIA GeForce RTX 4090 and RTX 4080 graphics cards will rock all the modern NV features, such as the latest 4th Gen Tensor Cores, 3rd Gen RT Cores, the latest NVENC encoder and NVDEC decoder, and support for the latest APIs. They will pack all the modern RTX features such as DLSS, Reflex, Broadcast, Resizable BAR, Freestyle, Ansel, Highlights, ShadowPlay, and G-SYNC support too.

  • NVIDIA GeForce RTX 4090 “Expected” TBP – 450W
  • NVIDIA GeForce RTX 3090 “Official” TBP – 350W

NVIDIA GeForce RTX 4090 Ti & RTX 4090 ‘Preliminary’ Specs:

| Graphics Card Name | NVIDIA GeForce RTX 4090 Ti | NVIDIA GeForce RTX 4090 | NVIDIA GeForce RTX 3090 Ti | NVIDIA GeForce RTX 3090 |
|---|---|---|---|---|
| GPU Name | Ada Lovelace AD102-350? | Ada Lovelace AD102-300? | Ampere GA102-350 | Ampere GA102-300 |
| Process Node | TSMC 4N | TSMC 4N | Samsung 8nm | Samsung 8nm |
| Die Size | ~600mm² | ~600mm² | 628.4mm² | 628.4mm² |
| Transistors | TBD | TBD | 28 Billion | 28 Billion |
| CUDA Cores | 18432 | 16128 | 10752 | 10496 |
| TMUs / ROPs | TBD / 384 | TBD / 384 | 336 / 112 | 328 / 112 |
| Tensor / RT Cores | TBD / TBD | TBD / TBD | 336 / 84 | 328 / 82 |
| Base Clock | TBD | TBD | 1560 MHz | 1400 MHz |
| Boost Clock | ~2800 MHz | ~2600 MHz | 1860 MHz | 1700 MHz |
| FP32 Compute | ~103 TFLOPs | ~90 TFLOPs | 40 TFLOPs | 36 TFLOPs |
| RT TFLOPs | TBD | TBD | 74 TFLOPs | 69 TFLOPs |
| Tensor TOPs | TBD | TBD | 320 TOPs | 285 TOPs |
| Memory Capacity | 24 GB GDDR6X | 24 GB GDDR6X | 24 GB GDDR6X | 24 GB GDDR6X |
| Memory Bus | 384-bit | 384-bit | 384-bit | 384-bit |
| Memory Speed | 24.0 Gbps | 21.0 Gbps | 21.0 Gbps | 19.5 Gbps |
| Bandwidth | 1152 GB/s | 1008 GB/s | 1008 GB/s | 936 GB/s |
| TGP | 600W | 450W | 450W | 350W |
| Price (MSRP / FE) | $1999 US? | $1499 US? | $1999 US | $1499 US |
| Launch (Availability) | 2023? | October 2022? | 29th March 2022 | 24th September 2020 |

The NVIDIA GeForce RTX 40 series cards, including the RTX 4080 and RTX 4070, will be among the first to launch to gamers after the RTX 4090. The RTX 4090 is so far expected to launch in October, with an unveiling expected at NVIDIA’s GTC keynote later this month.

Which NVIDIA GeForce RTX 40 series graphics card are you looking forward to the most?


Sony enters the high-end custom controller arena with the DualSense Edge

As long rumored, Sony announced details about its plans to enter the high-end controller game with the DualSense Edge, an advanced cousin to the PlayStation 5’s DualSense controller with added customization options and features.

If you’ve seen or used Microsoft’s Xbox Elite Series 2 or controllers from SCUF or others in that market, you’ll see a lot that’s familiar here.

The DualSense Edge includes all the same features seen in the DualSense, like haptic feedback, but it adds the ability to customize button mapping, stick sensitivity, trigger travel distance, and dead zones in multiple swappable control profiles.

It also offers changeable stick caps, and it supports back buttons that allow you to perform the same inputs you would with the (for example) triangle, cross, square, and circle face buttons without taking your thumb off the right stick. That’s particularly helpful for serious first-person shooter players, as it allows you to initiate jumps or slides in many games’ default control schemes while maintaining control of your aim.

The controller ships with multiple options for the stick caps and back buttons. That includes either half-dome or lever shapes for the back buttons and either standard, high dome, or low dome stick caps.

The stick modules can also be replaced to extend the life of the controller, Sony says, but those replacement stick modules will be sold separately—not included in the box like the caps or back buttons.

A “dedicated Fn button” will allow players to swap between pre-set control configurations, adjust game and mic relative volume, and head to the PS5’s controller profile settings menu directly from the controller itself.

The DualSense Edge will come with a case to store all those extras in, as well as a USB-C cable that locks onto the controller to prevent accidental unplugging.

DualSense Edge announcement video.

Sony hasn’t named a release date or price for this controller, but judging from other competing high-end controllers like Microsoft’s Xbox Elite Series 2 or SCUF’s Reflex for PS5, it’s not likely to be cheap. Those controllers typically range anywhere from around $180 to more than $200. Sony’s base controller for the PS5, the DualSense, is already quite pricey, with an MSRP of $70.

For the most part, the DualSense Edge seems almost a feature-to-feature counter to Microsoft’s Xbox Elite Series 2. That said, Microsoft’s controller also emphasizes nicer materials and in-hand feel, and Sony’s announcement hasn’t said anything about how the DualSense Edge will (or won’t) differ from the DualSense in that regard.

PlayStation VR2 gets a release window

Sony also finally named a release window for the PlayStation VR2, the PlayStation 5-exclusive follow-up to the highly successful 2016 PlayStation VR headset for the PlayStation 4.

In a tweet and an Instagram post, Sony announced that the headset is “coming early 2023” and accompanied the social posts with an image depicting the headset and its two controllers—though we’ve seen both before.

The image of the PlayStation VR2 headset and controllers that Sony shared on social media when announcing the release window. (Image: Sony)

As we’ve learned previously, the headset will have a 110-degree field of view, 2,000 x 2,040 per-eye resolution at either 90 Hz or 120 Hz, and it will feature HDR OLED displays for each eye.
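Those display specs imply a substantial rendering load for the PS5, which is worth quantifying. A quick arithmetic sketch of the raw pixel throughput the console would have to sustain (a back-of-envelope estimate from the quoted per-eye resolution and refresh rates, not an official Sony figure):

```python
# Raw pixel throughput implied by the quoted PSVR2 display specs.
width, height = 2000, 2040  # per-eye resolution
eyes = 2

for hz in (90, 120):
    pixels_per_second = width * height * eyes * hz
    print(f"{hz} Hz: {pixels_per_second / 1e9:.2f} gigapixels/s")
# 90 Hz:  0.73 gigapixels/s
# 120 Hz: 0.98 gigapixels/s
```

Rendering close to a gigapixel per second at full quality is demanding even for the PS5, which is part of why the eye-tracked foveated rendering described next matters so much.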

Perhaps most importantly, it will support eye-tracking that enables foveated rendering, a technique that allows VR headsets to focus their horsepower on the pixels that are clearest at the center of your view while allowing for natural blurriness in your peripheral vision.

And it will connect to the PS5 with a single USB-C cable with no external camera required—a far cry from the wire-laden unwieldy connections seen in the original PSVR.


Apple Silicon Face-Off: M1 Ultra and M1 Max take on high-end PC juggernauts

It’s high time Digital Foundry took a look at Apple Silicon, and today we’re looking at the higher-end chips in the line-up. Our focus is on the monster that is the M1 Ultra, found within the latest Apple Mac Studio, but we’re also checking out the MacBook Pro’s M1 Max. It’s the M1 Ultra that truly commands our attention, though: this system-on-chip is the highest-end processor Apple has designed to date, with the firm claiming it should be as fast as a high-end Windows desktop. Packing 20 CPU cores, a 21-teraflop GPU, and 800GB/s of memory bandwidth, it certainly seems like it could be – but how does it measure up in real-world testing, and how well does it game?

The M1 and M2 lines are the culmination of a long journey that has seen the firm transition away from SoCs based on third party design and IP, moving all chip design in-house to the furthest extent realistically possible. Apple designs its own GPUs, its own CPUs, and handles SoC design and integration. This results in tremendous control over processor design – the kind of control you would need to scale a phone processor up for high-end desktops.

Which brings us to the M1 Ultra. Since 2020, Apple has been moving its Mac desktops and notebooks away from Intel CPUs and AMD GPUs and over to its in-house SoCs, taking the same fundamental tech from iPhones and integrating it into computers. Apple started with lower-end, lower-power form factors, but finally came around to high-end desktops with the release of the Ultra a few months ago. The M1 Ultra isn’t really its own unique chip, however. It’s actually two M1 Max SoCs connected over a high-bandwidth 2.5TB/s interposer. To the operating system and the user it appears as one monolithic chip with a single CPU and GPU, but in reality it is two chips linked through a first-of-its-kind interconnect with the performance to support a dual-chip GPU and CPU.

Digital Foundry’s video analysis of the M1 Ultra and M1 Max processors, stacked up against powerful PC equivalents including the Core i9 12900K and RTX 3090.

The Ultra packs a whopping 20 CPU cores, split between 16 performance cores and 4 efficiency cores in a configuration similar to modern Intel designs. While the clockspeeds may be lower than desktop PC CPUs, instructions-per-clock are higher on the performance cores, leading to similar overall performance-per-core. In its highest-end spec, the 21 teraflop GPU features 64 of Apple’s in-house graphics cores, with performance similar to an RTX 3090 according to Apple, though we’ll touch on this later. To round things out, the system packs a stunning 800GB/s of memory bandwidth to keep those GPU and CPU cores well-fed.

M1 Ultra is only currently available in the Mac Studio desktop computer, which we tested in its maxed-out configuration, with 128GB of memory and an 8TB SSD. Most interestingly, this computer has a volume of just 3.7 litres, which is truly tiny and only slightly larger than an Xbox Series S. It uses two blower-style fans that pull air through a large copper heatsink to dissipate the roughly 200W that the system pulls at load, which is a small fraction of the energy used by a high-end desktop PC.

So let’s move on and actually measure how fast this machine is. We’re going to start off with gaming tests before closing with productivity benchmarks and synthetics. Is this machine truly as fast as a high end desktop PC – or possibly even faster? Let’s take a look at our gaming benchmarks, calculated via video capture as is the Digital Foundry way. While internal benchmarks are largely accurate these days, our philosophy is that the only frames that matter are the frames that actually make it to the video output of the hardware.
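The capture-counting approach described above can be sketched in a few lines. This is purely an illustration of the principle, not Digital Foundry's actual tooling; the raw frame buffers, 60Hz capture rate and hashing scheme here are all assumptions:

```python
# Sketch of counting delivered frames from a fixed-rate video capture.
# Frames are assumed to arrive as raw byte buffers from a 60 Hz capture.
import hashlib

def measure_fps(captured_frames, capture_hz=60):
    """Count frames that differ from their predecessor and convert to fps."""
    unique = 0
    last_digest = None
    for frame in captured_frames:
        digest = hashlib.md5(frame).digest()
        if digest != last_digest:  # a genuinely new frame reached the output
            unique += 1
            last_digest = digest
    duration_s = len(captured_frames) / capture_hz
    return unique / duration_s

# A game rendering 30fps captured at 60Hz duplicates every frame once:
frames = [b"frame%d" % (i // 2) for i in range(600)]  # 10s of capture
print(measure_fps(frames))  # 30.0
```

The appeal of this method is that it measures what the display actually receives, so dropped or duplicated frames inside the machine can't inflate the numbers.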

Gaming benchmarks | M1 Max (MBP 2021) | M1 Ultra (Mac Studio 2022) | RTX 3080M 150W (MSI GP66 Laptop) | RTX 3090 (Desktop PC)
Shadow of the Tomb Raider (fps) | 31.0 | 49.0 | 29.0 | 65.0
Metro Exodus (fps) | 27.9 | 34.8 | 30.3 | 71.6
Total Warhammer 3 (fps) | 14.9 | 25.4 | 25.3 | 47.4
World of Warcraft (fps) | 18.4 | 36.2 | 32.9 | 81.6
3DMark Wildlife Extreme (score) | 20215 | 35498 | 24247 | 42451
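The relative scaling discussed in the analysis can be derived straight from the table above. A quick sketch (the figures are copied from the table; the percentage framing is mine):

```python
# Relative scaling from the gaming results (values copied from the table).
results = {
    "Shadow of the Tomb Raider": {"max": 31.0, "ultra": 49.0, "rtx3090": 65.0},
    "Metro Exodus":              {"max": 27.9, "ultra": 34.8, "rtx3090": 71.6},
    "Total Warhammer 3":         {"max": 14.9, "ultra": 25.4, "rtx3090": 47.4},
    "World of Warcraft":         {"max": 18.4, "ultra": 36.2, "rtx3090": 81.6},
    "Wildlife Extreme":          {"max": 20215, "ultra": 35498, "rtx3090": 42451},
}

for name, r in results.items():
    vs_max = (r["ultra"] / r["max"] - 1) * 100    # gain over M1 Max
    of_3090 = r["ultra"] / r["rtx3090"] * 100     # share of RTX 3090 performance
    print(f"{name}: +{vs_max:.0f}% vs M1 Max, {of_3090:.0f}% of RTX 3090")
```

Running this reproduces, for example, the 76 percent Wildlife Extreme uplift over the M1 Max quoted in the analysis.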

It’s not a particularly large table because, unfortunately, there aren’t many high-end Mac games that we can actually test, particularly when it comes to big-budget titles. But we do have a few games here, and the results are intriguing. For our gaming tests, we’ve got a 16-inch MacBook Pro with the fully-enabled M1 Max chip, our maxed-out Mac Studio, an MSI GP66 gaming laptop with an eleventh-gen Core i9 and a 150W RTX 3080 mobile GPU, and a high-end desktop PC with a Core i9 12900K paired with the mighty RTX 3090.

Let’s start with Shadow of the Tomb Raider. This isn’t a native Apple Silicon game, as the title was written for x86, so the M1 chips have to run through the Rosetta 2 translation layer, but that doesn’t seem to have much of an impact on performance. The benchmark sequence running at max settings at 4K shows the 3080M and M1 Max neck-and-neck, while the Ultra falls squarely between the M1 Max and the 3090. The Ultra delivers solid performance and reasonable scaling from the Max, but isn’t quite holding the line against ultra high-end GPUs.

Metro Exodus – the original non-RT version – has a decent Mac port, although again it was written for x86. The Ultra splits the PCs here as well, while the Max does a good job of fending off the 3080M. On the flip side, there seem to be very serious problems with frame-times and stuttering when vsync is disabled on Macs for some reason, which I noticed across these tests. Total War: Warhammer 3 is another x86 game, but it doesn’t seem to hold up quite as well as Metro or Tomb Raider. M1 Ultra is far behind the 3090 here and barely keeps pace with a high-end gaming laptop. Perhaps this can be chalked up to a sub-optimal port, or problems with the Rosetta translation.

Apple Silicon games and benchmarks are hard to find, but 3DMark’s Wildlife Extreme is indeed a native application.

But what about native Apple Silicon games? There are remarkably few games for Apple Silicon, and most of them are iOS ports, not conventional PC software. There is one prominent game that we can test across platforms though – World of Warcraft. This is a full-bore Apple Silicon version of Blizzard’s long-running MMO, but despite running natively, the same pattern emerges with the M1 Ultra yet again falling squarely between the two PC systems, falling well short of the 3090 but still delivering performance in line with a high-end PC GPU. The Max is borderline unplayable while the 3080M hovers around 30fps. All of these systems would be perfectly fine with the game at remotely reasonable settings, of course – we are running the game essentially maxed out at a whopping 8K internal resolution to create a proper stress test.

There’s one cross-platform game graphics benchmark that runs natively on Apple Silicon as well – 3DMark Wildlife Extreme, which renders a set of relatively simple 3D scenes at 4K. Here, the Ultra falls somewhat short of the 3090 but comes in a solid 76 percent faster than the Max. Ultimately, the Ultra seems to sit somewhere below the 3090 in graphics performance, at least as far as we can tell from benchmarking across operating systems. It’s still a powerful processor though and seems to slot in at roughly the 3070 or 3080 level depending on workload.

Scaling from the M1 Max is reasonable, but not perfect. Typically, you should expect a 60-70 percent performance improvement over the single-chip option. Perhaps the interposer is causing some minor hiccups here, as using multiple chips for one GPU requires a massive amount of bandwidth.

These results are really just for evaluating raw performance though, as the Mac is not a good gaming platform. Very few games actually end up on Mac and the ports are often low quality. If there is a future for Mac gaming it will probably be defined by “borrowing” games from other platforms, either through wrappers like Wine or through running iOS titles natively, which M1-based Macs are capable of. In the past, Macs could run games by installing Windows through Apple’s Bootcamp solution, but M1-based chips can’t boot natively into any flavour of Windows, not even Windows for ARM.

Blender (CPU samples per minute) | M1 Max (MBP 2021) | M1 Ultra (Mac Studio 2022) | Core i9 10850K (Desktop PC) | Core i9 12900K (Desktop PC)
Monster | 99.4 | 195.9 | 88.7 | 178.1
Junkshop | 53.8 | 107.33 | 50.7 | 101.1
Classroom | 43.3 | 84.4 | 37.8 | 82.3

Geekbench (score) | M1 Max (MBP 2021) | M1 Ultra (Mac Studio 2022) | Core i9 10850K (Desktop PC) | Core i9 12900K (Desktop PC)
Multi-Core | 12577 | 23580 | 9599 | 17446
Single-Core | 1774 | 1784 | 1285 | 1820

Cinebench (score) | M1 Max (MBP 2021) | M1 Ultra (Mac Studio 2022) | Core i9 10850K (Desktop PC) | Core i9 12900K (Desktop PC)
Multi-Core | 12259 | 23908 | 11171 | 25160
Single-Core | 1528 | 1531 | 1095 | 1858

Handbrake 4K60 encode | M1 Max (MBP 2021) | M1 Ultra (Mac Studio 2022) | Core i9 10850K (Desktop PC) | Core i9 12900K (Desktop PC)
Time (mins:secs) | 7:10 | 4:08 | 5:43 | 2:44
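The CPU scaling can be pulled straight out of these tables. A quick sketch (scores copied from the tables above; note that the Handbrake result is a time, so it is inverted, and the exact average depends on which tests you include):

```python
# Ultra-versus-Max scaling across the CPU tests (numbers from the tables).
scores = {  # (M1 Max, M1 Ultra), higher is better
    "Blender Monster":      (99.4, 195.9),
    "Blender Junkshop":     (53.8, 107.33),
    "Blender Classroom":    (43.3, 84.4),
    "Geekbench Multi-Core": (12577, 23580),
    "Cinebench Multi-Core": (12259, 23908),
}
handbrake = (7 * 60 + 10, 4 * 60 + 8)  # seconds: M1 Max, M1 Ultra

gains = [ultra / base - 1 for base, ultra in scores.values()]
gains.append(handbrake[0] / handbrake[1] - 1)  # shorter time = faster

for (name, _), g in zip(scores.items(), gains):
    print(f"{name}: +{g * 100:.0f}%")
print(f"Handbrake 4K60: +{gains[-1] * 100:.0f}%")
print(f"Average multicore scaling: +{sum(gains) / len(gains) * 100:.0f}%")
```

The Blender and Cinebench runs land near a perfect 2x, while Handbrake trails at around 73 percent, which is why the overall average sits just below linear scaling.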

As you’ve likely realised from the table above, I also spent some time benchmarking the CPU in the M1 Ultra. I tested Blender, Geekbench, Cinebench and Handbrake, and the Ultra’s results are compelling. We’ve swapped the GP66 for my desktop computer here, which packs a Core i9 10850K; think of it as a Core i9 10900K with a barely perceptible clock-speed reduction. Across these tests, the 12900K and M1 Ultra prove very comparable. The two chips are essentially a match in multicore performance, though the ultra-high frequencies the 12900K is capable of can give it the edge in some single-threaded tests. The 10850K and M1 Max are closely matched as well.

The scaling from M1 Max to M1 Ultra is close-to-linear across these runs, unlike our graphics benchmarks. On average, M1 Ultra is 88 percent faster, with some results approaching 100%. Linking up two clusters of cores across an inter-chip medium is something we’ve seen in the PC space for years now and very good scaling is to be expected here.

Finally, I thought I’d throw in some real-world benchmarks from a couple of programs I frequently use – Final Cut Pro and Topaz Video Enhance AI. We’re looking at the two M1 computers here, as well as a 16 inch 2019 MacBook Pro with an eight-core Intel CPU and an AMD RDNA 1-based GPU. The results are very curious in Final Cut. While both M1 machines trounce the Intel-based MacBook, export times are virtually identical across the M1s. So what’s going on?

With typical Final Cut workloads on M1 chips, export performance seems to be dictated by the hardware video encoders. The M1 Ultra has the same video hardware encoders as the Max, so there’s no meaningful performance difference when encoding a ProRes or h.264 video without many effects. To actually see a difference in export times, you’d need to really stress the GPU with lots of effects and Motion templates. Even then it would be hard to see a large difference. That isn’t to say that there aren’t big moment-to-moment performance differences, though – Final Cut generates video thumbnails in real-time on the CPU cores, which occurs nearly instantly on an M1 Ultra and is significantly slower on M1 Max. In general, the timeline is more responsive and the editing process is more fluid – but that won’t be reflected in simple export tests.

Export tests | M1 Max (MBP 2021) | M1 Ultra (Mac Studio 2022) | Core i9 9980HK, Radeon Pro 5500M (MBP 2019)
Final Cut h.264 export (mins:secs) | 1:05 | 1:03 | 1:33
Final Cut ProRes export (mins:secs) | 0:23 | 0:23 | 1:25
Topaz Video Enhance, 1080p to 4K upscale, Artemis High (mins:secs) | 6:01 | 4:12 | 13:33
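The speed-ups quoted in the analysis fall straight out of this table once the times are converted to seconds. A quick sketch:

```python
# Speed-ups from the export times above (converted to seconds).
def secs(m, s):
    return m * 60 + s

topaz_max, topaz_ultra = secs(6, 1), secs(4, 12)
speedup = topaz_max / topaz_ultra - 1
print(f"Topaz upscale: M1 Ultra finishes {speedup * 100:.0f}% faster")

# Final Cut exports barely move because they're encoder-bound, not GPU-bound:
fc_h264 = secs(1, 5) / secs(1, 3) - 1
print(f"Final Cut h.264: only {fc_h264 * 100:.1f}% faster")
```

The contrast between the two numbers is the point: the GPU-bound Topaz workload sees a real uplift, while the encoder-bound Final Cut export is essentially flat.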

Topaz AI is much more straightforward. We’re strictly GPU-bound here and the M1 Ultra shows a solid performance improvement, completing the test 43 percent faster, though that’s not particularly impressive given the doubling of GPU hardware. Both machines crush the 2019 MacBook Pro, as expected.

So, the M1 Ultra packs similar performance to the highest-end PC chips, trading blows across a variety of metrics. CPU performance is up there with the best Intel has to offer, while the GPU sits one or two rungs beneath the PC performance leaders at the moment. The key metric with M1 Ultra isn’t raw performance, however, even though it is largely competitive with PCs on that front. It’s power consumption. The Ultra manages to pull even with fast consumer desktops while consuming one quarter to one third of the power. The Mac Studio itself only pulls about 200W when fully loaded, and usually draws much less.
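To put that efficiency claim in rough numbers, here is a back-of-the-envelope performance-per-watt comparison using the Wildlife Extreme scores from earlier. The Mac Studio's ~200W figure is from our measurements; the ~600W whole-system figure for the 12900K/RTX 3090 desktop under combined load is my own rough assumption, not a measured value:

```python
# Back-of-the-envelope perf-per-watt from the Wildlife Extreme scores.
mac_score, mac_watts = 35498, 200  # Mac Studio, measured at-load draw
pc_score, pc_watts = 42451, 600    # 12900K + RTX 3090; watts is an assumption

mac_ppw = mac_score / mac_watts
pc_ppw = pc_score / pc_watts
print(f"Mac Studio: {mac_ppw:.0f} points/W, desktop PC: {pc_ppw:.0f} points/W")
print(f"Efficiency advantage: {mac_ppw / pc_ppw:.1f}x")
```

Even if the assumed desktop figure is off by 100W in either direction, the Mac Studio comes out comfortably ahead on this metric.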

So, why is the M1 Ultra so much more efficient than comparable PC designs? Firstly, Apple has a considerable process node advantage over its competitors. By leveraging TSMC’s 5nm process, Apple is one or two silicon fabrication nodes ahead of its nearest rivals at the moment, which means higher density and lower power consumption for Apple’s chips. Apple generally gets access to TSMC’s newest processes before its PC competitors and has been producing chips at 5nm for over two years at this point.



Secondly, Apple is simply throwing far more silicon at the problem. The M1 Ultra uses a whopping 114 billion transistors across two chips; in contrast, the GA102 GPU in the RTX 3090 packs just 28 billion transistors. With so much more logic, Apple can run its chips at lower clocks and lower voltages and still achieve similar performance. The extremely high density of TSMC 5nm helps a lot here. Lastly, Apple’s CPU and GPU architectures play a significant role. These architectures were conceived primarily for iPhones and other low-power applications, and there are likely many mechanisms inside the chip to keep energy consumption in check, including very effective power gating.

Given the immense potential of the Apple solution, there’s one final question worth addressing: would a move to ARM be practical for the broader PC market as well? After all, Apple achieved an enormous performance improvement when it moved to ARM, so could this be a good solution for PC vendors too?

Generally the answer is no, at least not at the moment, and there are two major problems here. The first is that the things that make Apple’s designs effective aren’t specific to the ARM instruction set license it uses; they’re mostly factors we’ve discussed already, with its unique high-performance architectures and process node advantage being the most important. Critically, no-one else currently offers an ARM CPU core design capable of going toe-to-toe with AMD and Intel. The second problem is the lack of an effective translation layer for x86 code. macOS has Rosetta 2, which is a relatively efficient and broadly compatible solution for running x86 code seamlessly on ARM-based Macs, while Windows 11 for ARM has a software emulator for x86 programs, but its performance is degraded and compatibility is lacking.

Apple’s die-shots of its M1 silicon line-up may or may not be accurate, but essentially, the GPU in the M1 Max is twice the size of the M1 Pro’s, which is in turn twice the size of the M1’s. The M1 Ultra effectively stacks up two M1 Max chips, meaning a potential doubling of both CPU and GPU resources.

The M1 Ultra is an extremely impressive processor. It delivers CPU and GPU performance in line with high-end PCs, packs a first-of-its-kind silicon interposer, consumes very little power, and fits into a truly tiny chassis. There’s simply nothing else like it. For users already in the Mac ecosystem, this is a great buy if you have demanding workflows. While the Mac Studio is expensive, it is less costly than Apple’s old Pro-branded desktops – the Mac Pro and iMac Pro – which packed expensive Xeon processors and ECC RAM. Final Cut, Photoshop, Apple Motion, Handbrake – pretty much everything I use on a daily basis runs very nicely on this machine.

For PC users, however, I don’t think this particular Apple system should be particularly tempting. While CPU performance is in line with the best from Intel and AMD, GPU performance is somewhat less compelling. Plus, new CPUs and GPUs are incoming in the next few months that should cement the performance advantage of top-end PC systems. That said, the M1 Ultra is a one-of-a-kind solution. You won’t find this kind of raw performance in a computer this small anywhere else.

Gaming on Mac has historically been quite problematic and that remains the case right now – native ports are thin on the ground and when older titles such as No Man’s Sky and Resident Evil Village are mooted for conversion, it’s much more of a big deal than it really should be. Perhaps it’s the expense of Apple hardware, perhaps it’s the size of the addressable audience or maybe gaming isn’t a primary use-case for these machines, but there’s still the sense that outside of the mobile space (where it is dominant), gaming isn’t where it should be – Steam Deck has shown that compatibility layers can work and ultimately, perhaps that’s the route forward. Still, M1 Max and especially M1 Ultra are certainly very capable hardware and it’ll be fascinating to see how gaming evolves on the Apple platform going forward.


Read original article here

Samsung’s New $1,800 Foldable Galaxy Phone Tests High-End Budgets

NEW YORK—The entire smartphone industry is slumping except for the priciest devices.

Samsung Electronics Co. is testing the limits of that high-end demand.

On Wednesday, Samsung unveiled its latest models of two of the world’s most-expensive phones. The Galaxy Z Fold 4, which becomes the size of a small tablet when opened, will cost about $1,800. The more compact Galaxy Z Flip 4 will go for around $1,000. The phones have prices similar to last year’s versions and become available in the U.S. later this month.

Total smartphone shipments slid 8% in the first half of this year versus the same period in 2021, largely because consumers have cut back spending on nonessential goods amid inflation and a shakier economic outlook, according to Counterpoint Research, a research firm. The declines were steepest for the lowest-priced devices, it said.

Foxconn Technology Group, the world’s biggest iPhone assembler, on Wednesday said demand for smartphones and other consumer electronics is slowing, prompting it to be cautious about the current quarter.

Shipments of “ultra-premium” phones—devices sold for $900 or more—grew by more than 20% during the same period, Counterpoint said. This category comprises mostly Apple Inc.’s iPhones and Samsung’s flagship devices.

WSJ’s Dalvin Brown checks out the newest foldable smartphones from Samsung to see if the kinks in early models have been ironed out and whether folding is a feature worth spending for, or just a gimmick. Illustration: Adele Morgan

The resilience of the phone industry’s upper class mirrors that of the luxury-goods business, as wealthier consumers show a willingness to keep spending on clothing, handbags and jewelry despite economic rockiness. Brands including LVMH Moët Hennessy Louis Vuitton SE, Ralph Lauren Corp. and Gucci owner Kering SA have reported robust growth this year.

Apple, in its most recent quarter, reported a surprise rise in iPhone sales, defying analysts’ expectations for a decline. There has been no obvious macroeconomic impact on iPhone sales in recent months, Apple Chief Executive Tim Cook said on an earnings call last month.

Samsung, the world’s largest smartphone maker, recently said it expects the overall smartphone market to see shipments stay flat or experience minimal growth this year. But the South Korean company expressed optimism that its foldable-display devices, which are among its most expensive products, would sell well.

Demand for iPhones and Samsung’s flagship devices, boosted in recent years by the arrival of superfast 5G connectivity and pandemic-time splurging on gadgets, should remain high, said Tom Kang, a Seoul-based analyst for Counterpoint. “It’s clear that the affluent consumers are not affected by current economic headwinds,” Mr. Kang said.

Samsung has much riding on the Galaxy Z Fold 4, left, and the Galaxy Z Flip 4 becoming a success.



Photo: Samsung

The smaller of the two new devices, the Galaxy Z Flip 4, is an update of the model that accounted for most of Samsung’s foldable-phone sales last year. When fully open on its vertical axis, it has a display that measures 6.7 inches. When closed, it is half the size of most mainstream smartphones, and owners can view text messages and other alerts on a smaller, exterior screen. Compared with last year’s version, Samsung said the Galaxy Z Flip 4 takes better photos and has a slimmer hinge and larger battery.


The heftier Galaxy Z Fold 4 sports a tablet-sized display that is 7.6 inches diagonally when fully opened. It opens and closes like a book, and when shut, it has a 6.2-inch outer screen that performs most smartphone functions. The new version has a slightly thinner hinge and improved camera capabilities, Samsung said.

The Galaxy Z Fold 4 is the first device to use Android 12L, a version of the operating system created by Alphabet Inc.’s Google specifically for tablets and foldable phones, Samsung said.

Alongside the two foldable phones, Samsung on Wednesday also introduced two new versions of its Galaxy Watch 5, as well as a new edition of its Galaxy Buds wireless earphones, the Galaxy Buds 2 Pro.

Samsung has much riding on the Galaxy Z Fold 4 and the Galaxy Z Flip 4 becoming a success. Given their high price and fatter margins, foldable devices could represent about 60% of Samsung’s mobile-division operating profits, despite accounting for roughly one-sixth of the company’s smartphone shipments, said Sanjeev Rana, a Seoul-based analyst at brokerage CLSA.

Samsung said the Galaxy Z Fold 4 is the first device to use Android 12L, a version of the operating system created by Google specifically for tablets and foldable phones.



Photo: Samsung

Across the industry, the priciest tier of smartphones represent about 10% of annual shipments but about 70% of the industry’s profits, Counterpoint said.

Samsung was a pioneer in an industry that had gone stale when it released the first mainstream foldable smartphone more than three years ago. But the original Galaxy Fold stumbled out of the gate. Design flaws delayed its release. The pandemic closed stores, cutting off opportunities for would-be early adopters to test out the devices, Samsung executives have said. And many consumers balked at an initial price tag close to $2,000.

Last year, Samsung’s Galaxy Z Fold 3 and Galaxy Z Flip 3 saw stronger sales, helped by price cuts. The company also juiced demand through aggressive promotions and trade-in discounts that made purchases more affordable.

Worldwide foldable smartphone shipments are expected to total nearly 16 million units this year, up roughly 73% from the prior year, Counterpoint said. Samsung is projected to account for roughly 80% of the foldable market this year, according to Counterpoint.

The other foldable players—selling at prices below the ultra-premium threshold—include major Chinese brands such as Huawei Technologies Co. and Xiaomi Corp., as well as BBK Electronics Co.-owned Vivo and Oppo.

Lenovo Group Ltd.’s Motorola, which first launched a foldable phone in 2019, is slated to introduce a new model this month.

Write to Jiyoung Sohn at jiyoung.sohn@wsj.com

Copyright ©2022 Dow Jones & Company, Inc. All Rights Reserved.

Read original article here

AMD’s High-End X670E Motherboards From ASUS, MSI, Gigabyte, ASRock & Biostar Detailed

AMD’s motherboard partners such as ASUS, MSI, ASRock, Gigabyte & Biostar have unveiled more details about their top X670E designs for Ryzen 7000 Desktop CPUs.

ASUS, MSI, Gigabyte, ASRock & Biostar Showcase Their High-End AMD X670E Motherboards

The AMD Ryzen 7000 CPUs will be migrating to a new home known as AM5, the successor to the long-lasting AM4 platform. It marks a fresh start for the Ryzen desktop family, and as such, existing Ryzen CPUs, from Ryzen 1000 all the way up to Ryzen 5000, won’t be supported by the new platform; we’ll explain why below.

The AM5 platform will first and foremost feature the brand new LGA 1718 socket. That’s correct: AMD isn’t going the PGA (Pin Grid Array) route anymore and is instead adopting an LGA (Land Grid Array) design, similar to what Intel uses on its existing desktop processors. The main reason to go LGA is the addition of enhanced, next-gen features such as PCIe Gen 5 and DDR5 that we will get to see on the AM5 platform. The socket has a single latch, and gone are the days of worrying about bent pins underneath your precious processors.

Representatives of each motherboard manufacturer joined AMD’s latest “Meet The Experts” live-stream to talk about their next-gen X670E designs, but it looks like we may still be missing some details regarding overclocking and memory support, which AMD might not want discussed just yet even though the full announcement of the product lineup is only a few weeks away on 29th August, with a launch to follow. So let’s look at what the new high-end motherboards have to offer.

ASUS X670E Motherboards

ASUS kicked things off by unveiling its high-end ROG Crosshair X670E Extreme and ROG Crosshair X670E HERO motherboards. The ROG Crosshair boards come with 20+2 power phases on the Extreme and 18+2 on the HERO. Both models use hefty 110A power stages in a teamed design; the VCore PWM controller is an Infineon ASP2205, while the power stages are Vishay SIC850 parts.

  • ROG Crosshair X670E Extreme – 20+2 Phase (110A)
  • ROG Crosshair X670E HERO – 18+2 Phase (110A)

ASUS specifically states that the high-end power delivery is a necessity when overclocking the CPU as it leads to massive current swings and power demand increasing exponentially. Some features highlighted include WiFi 6E (AX210), 10 GbE Marvell AQC113CS connectors, Gen 5.0 PCIe x16 & M.2, USB 4 and Quick Charge 4+ ports.

MSI X670E Motherboards

MSI will be rolling out four brand new X670E motherboards within its MEG, MPG, and PRO lineups. We recently revealed its flagship MEG X670E GODLIKE motherboard, and the manufacturer has confirmed the specs and PCB we reported. The VRM configurations for MSI’s X670E motherboards are listed below:

  • MEG X670E GODLIKE – 24 (105A) + 2 + 1
  • MEG X670E ACE – 22 (90A) + 2 + 1
  • MPG X670E Carbon – 18 (90A) + 2 + 1
  • PRO X670E-P WiFi – 14 (80A) + 2 + 1
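On paper, those phase counts and per-stage ratings multiply out to the theoretical VCore current each board can deliver. A quick sketch (figures from the list above; real sustained delivery depends on thermals and the controller, so treat these strictly as upper bounds):

```python
# Theoretical combined VCore current of the power stages listed above
# (phase count times per-stage rating). Paper headroom, not sustained output.
boards = {  # (VCore phases, amps per power stage)
    "MEG X670E GODLIKE": (24, 105),
    "MEG X670E ACE":     (22, 90),
    "MPG X670E Carbon":  (18, 90),
    "PRO X670E-P WiFi":  (14, 80),
}
for name, (phases, amps) in boards.items():
    print(f"{name}: {phases} x {amps}A = {phases * amps}A VCore capacity")
```

Even the most modest board here offers far more current than a stock Ryzen 7000 CPU will draw; the extra headroom mostly matters for extreme overclocking.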

MSI is pushing things to the limit with high-end cooling designs such as screwless M.2 Shield Frozr technology, the M.2 XPANDER-Z Gen 5 Dual AIC (an actively cooled add-in card supporting up to two PCIe Gen 5.0 x4 SSDs), 60W USB Type-C power delivery and more robust power delivery at each tier of the lineup. We also get a better look at the MEG X670E GODLIKE, which looks as beastly as ever with its densely packed PCB design and tons of IO to work with. More details on MSI’s lineup here.

Gigabyte X670E Motherboards

The lineup that Gigabyte unveiled includes four AORUS motherboards which include X670E AORUS Xtreme, AORUS Master, AORUS Pro AX & AORUS Elite AX. The Xtreme is expected to break some OC records on AMD Ryzen 7000 CPUs.

  • X670E AORUS Xtreme – 18 Phase (SPS 105A) Renesas RAA229628
  • X670E AORUS Master – 16 Phase (SPS 105A) Renesas RAA229620
  • X670E AORUS Pro AX – 16 Phase (SPS 90A) Infineon XDPE192C3
  • X670E AORUS Elite AX – 16 Phase (SPS 70A) Infineon XDPE192C3

We already covered these motherboards including the AERO model previously over here along with their prices.

ASRock X670E Motherboards

ASRock is showcasing five X670E motherboards for the AMD Ryzen 7000 Desktop CPUs. These include the X670E Taichi Carrara, X670E Taichi, X670E Steel Legend, X670E PRO RS & X670E PG Lightning. All five motherboards feature full compatibility with the next-gen AMD Zen 4 CPUs along with DDR5 memory and PCIe Gen 5.0.

The company highlighted some of the main features being USB Type-C with fast charging, an 8-layer PCB design, PCIe 5.0, and M.2 fan heatsink design plus DDR5 with protection circuits. The lineup is also detailed by us here.

Biostar X670E Motherboards

Biostar also talked a bit about its flagship X670E VALKYRIE motherboard, which features a 22-phase VRM with DrMOS power stages and digital PWM ICs, plus a very solid-looking board design. It is a very premium product designed to support the highest-end AMD Zen 4 CPUs.

Will There Be mATX & Mini-ITX AM5 Motherboards?

Answering a question raised by viewers on whether we will see mATX and Mini-ITX designs within the AM5 family, ASRock’s Mike Yang stated that there are certain obstacles still being worked on, such as heat dissipation in such small form factors, but once a breakthrough is made, they certainly plan on offering smaller board designs for every chipset of the AMD 600-series line.

Do 2280 M.2 SSDs Fit On The New 25110 M.2 Slots?

MSI’s Michiel Berkhout stated that the existing 2280 M.2 form factor is fully compatible with the 25110 M.2 slots featured on their motherboards.

Will Gigabyte Have A Tachyon Motherboard For AM5?

Gigabyte’s Sofos Oikonomou has stated that there will indeed be a Tachyon motherboard based on the AM5 socket, but it will use a different chipset, not X670, so we are likely looking at a B650(E) product.

It is always great to hear more information directly from motherboard manufacturers, but key details such as AMD EXPO DDR5 memory and overclocking support are still missing. It looks like we now have to wait for the reviews, which don’t arrive until the 13th of September, to get more data, but we will try to provide more information in the coming weeks.




Read original article here

AMD Confirms Ryzen 7000 “Raphael” CPU Launch This Quarter, High-End RDNA 3 GPUs & EPYC Genoa On Track For Late 2022

During the earnings call for its record Q2 2022 financials, AMD’s CEO, Lisa Su confirmed the launch of Ryzen 7000 CPUs, RDNA 3 GPUs, and EPYC Genoa chips in the coming months of 2022.

AMD Confirms Ryzen 7000 With Zen 4 Cores For Q3 2022, High-End RDNA 3 GPUs & EPYC Genoa CPUs Coming Later This Year

AMD posted a record quarter just a few hours ago, with revenue up 70% year over year and Data Center revenue alone climbing to $1.5 billion in Q2 2022.

AMD Ryzen 7000 “Raphael” CPUs Launching This Quarter

So first of all, let’s get the big fish out of the way. AMD’s CEO, Lisa Su, confirmed that the red team will be bringing its Ryzen 7000 desktop CPUs, codenamed Raphael and based on the Zen 4 core architecture, to store shelves this quarter. While an exact date hasn’t been mentioned, it looks like the leaked September launch might become a reality. The launch will include not only the CPU lineup but also brand new 600-series motherboards, such as the X670E and X670 boards that are supposed to be part of the first wave, along with four chips that are presumably going to make up the initial “X” series lineup.

Looking ahead, we are on track to launch our all-new 5-nanometer Ryzen 7000 desktop processors and AM5 platforms later this quarter, with leadership performance in gaming and content creation.

Lisa Su, AMD CEO (Q2 2022 Earnings Call)

AMD Radeon RX 7000 “RDNA 3” GPUs Launching Late 2022

AMD also reaffirmed that it will be launching its “high-end” RDNA 3 GPUs later this year. This mirrors the Zen 3 and RDNA 2 launches, which took place just months apart. It looks like we may get a teaser of the RDNA 3 Radeon RX 7000 GPUs during the Ryzen 7000 launch, but the official launch would take place in either October or November. AMD focusing on the high-end first means its top-end solution will compete directly against NVIDIA’s high-end Ada Lovelace graphics cards.

While we expect the gaming graphics market to be down in the third quarter, we remain focused on executing our GPU roadmap, including launching our high-end RDNA 3 GPUs later this year.

Lisa Su, AMD CEO (Q2 2022 Earnings Call)

AMD EPYC 9000 “Genoa” CPUs On-Track For 2022 Launch

Lastly, we have AMD confirming that its EPYC 9000 “Genoa” CPUs are on track for launch by the end of this year. The company is already seeing huge demand for Genoa and is also working to get Bergamo “Zen 4C” out by early next year, along with the 3D V-Cache-boosted Genoa-X chips in 2022.

Looking ahead, customer pull for our next-generation 5-nanometer generalist server CPU is very strong. We are on track to launch and ramp production of Genoa as the industry’s highest performance general-purpose server CPU later this year, positioning our data center business for continued growth and share gains.

In addition to Genoa, we have our Bergamo, which is a cloud-optimized capability as well that’s coming online early next year. So there’s a lot of new products that are supporting sort of our growth ambitions.

From what we see today, again, there is a strong customer pull on Genoa.

Lisa Su, AMD CEO (Q2 2022 Earnings Call)

Overall, AMD looks set to capture some serious market share in the server and client PC segments with its upcoming Ryzen, Radeon, and EPYC products. We can’t wait to see what AMD comes out with over the next few months.



Read original article here

AMD teases new ‘Dragon Range’ CPUs for high-end gaming laptops

With the release of its Q1 2022 financial results, AMD also revealed plans for its upcoming Ryzen 7000 Zen 4 series laptop CPUs, as seen in a slide tweeted by former AnandTech editor Dr. Ian Cutress. It’s planning to target “extreme gaming laptops” with the new “Dragon Range” series, promising the “highest core, thread and cache ever for a mobile gaming CPU.” It also unveiled the Phoenix series for thin-and-light gaming laptops.

The Dragon Range features a >55 watt TDP and targets laptops thicker than 20mm that are largely meant to be used while plugged in, The Verge reported. They’ll feature a PCIe 5 architecture and DDR5 RAM, though some models could work with more efficient but lower-performing LPDDR5, AMD told Cutress.

As with the Ryzen 9 4900HS chip, the Dragon Range will use the “HS” suffix. Despite the relatively high 55 watt TDP, they’ll be “notably more power efficient than other laptops in that competing timeframe,” according to AMD’s technical marketing director, Robert Hallock. 

Along with the Dragon Range, AMD will launch the Ryzen 7000 Zen 4 “Phoenix” series APUs designed for thin and light laptops under 20mm thick with 35-45 watt TDPs. Those will also use a PCIe 5 architecture, but come primarily with LPDDR5 RAM. As with the Dragon Range, some models could employ DDR5 memory, too.

Ryzen 7000 will launch first on desktop later this year with the Raphael series, replacing the Ryzen 5000 lineup. Those will be the first Zen 4, AM5-platform chips using TSMC’s 5-nanometer process node to come to the mainstream market. AMD didn’t reveal other details about the Dragon Range and Phoenix laptop chips, but they’re expected to launch sometime in 2023.

On the earnings side, AMD beat market expectations with revenue at $5.89 billion, a 71 percent boost in sales year-over-year. It also said that starting next quarter, it will break out gaming into a separate financial segment showing sales of chips for consoles (PS5, Xbox Series X, etc.) plus Radeon graphics for PCs as part of a single gaming business, separate from Ryzen chips. The company will explain all that in more detail next month. 

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.



Read original article here