

Archived

This thread is archived and closed to further replies.

Diablo2289

Xbox One VS PS4 VS PC: Existential Dilemma 3.0

Featured post

DirectX 12 Ultimate: graphics to the nth power on PC and Xbox Series X

DirectX 12 Ultimate is the name of Microsoft's new libraries which, in addition to full support for DirectX Raytracing (DXR) 1.1, guarantee full support for Variable Rate Shading, Mesh Shaders and Sampler Feedback. One library shared by PC and the next-generation console.

by Manolo De Agostini, published 19 March 2020 at 18:26 in the GRAPHICS CARDS channel
Tags: AMD, Radeon, GeForce, NVIDIA, Microsoft, DirectX, Xbox
 
 
 

DirectX 12 Ultimate is the name of Microsoft's new libraries, which integrate not only full ray tracing support but also new techniques for developers to exploit: Variable Rate Shading, Mesh Shaders and Sampler Feedback.

Shared between Windows PCs and the new Xbox Series X, DirectX 12 Ultimate is fully supported by Nvidia's GeForce RTX 20 series cards based on the Turing architecture and by AMD solutions based on the RDNA 2 architecture (so not the current Navi parts such as the RX 5700). The libraries will be fully integrated into Windows 10 20H1, the first feature update of the year for Microsoft's operating system, arriving shortly.


What does DirectX 12 Ultimate mean in concrete terms? Full support for the DirectX Raytracing 1.1 libraries, improved over version 1.0, joined by Variable Rate Shading, Mesh Shaders and Sampler Feedback, which were previously "external" to the API. There are currently more than 30 PC games with ray tracing support, counting both available and upcoming titles. The arrival of the next-gen consoles at the end of the year will push that number up, turning ray tracing into a de facto standard.
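For readers curious how a PC engine detects these features, here is a minimal sketch (not from the article) that queries the four DirectX 12 Ultimate feature blocks through the public D3D12 API; device creation and error handling are omitted.

```cpp
#include <d3d12.h>

// Returns true if the device exposes all four DirectX 12 Ultimate pillars.
// If a CheckFeatureSupport call fails on an older runtime, the zero-initialised
// structs simply report tier 0 and the function returns false.
bool SupportsDX12Ultimate(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opt5 = {};
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opt6 = {};
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opt7 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opt5, sizeof(opt5));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opt6, sizeof(opt6));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &opt7, sizeof(opt7));

    return opt5.RaytracingTier          >= D3D12_RAYTRACING_TIER_1_1          // DXR 1.1
        && opt6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2 // VRS
        && opt7.MeshShaderTier          >= D3D12_MESH_SHADER_TIER_1           // Mesh shaders
        && opt7.SamplerFeedbackTier     >= D3D12_SAMPLER_FEEDBACK_TIER_0_9;   // Sampler Feedback
}
```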

What is ray tracing? We will publish an in-depth article in the coming weeks, but broadly speaking - for anyone not yet familiar with it - it is a rendering technique that reproduces reflections and shadows far more realistically than classic rasterisation. A ray-traced game is not rendered entirely with ray tracing; we talk instead about hybrid rendering, a blend of rasterisation and ray tracing.


Variable Rate Shading (which we will also cover in a dedicated article) is an extremely important technology, on PC but above all on consoles, where optimising GPU usage is essential to hit what both Microsoft and Sony are aiming for: 4K at 60fps, or frame rates up to 120fps for lighter games in the Fortnite mould. Variable Rate Shading lets developers focus GPU power on specific areas of the game image - certain characters, a vehicle and the track it runs on, and so forth - and spend less on areas that can be considered "secondary".

The rest of the image, outside the player's focus, can be rendered at a slightly lower quality, allowing the GPU to reach a higher frame rate. In practice, only a fraction of the areas in a game scene requires maximum detail, while the others can shed some nuance that matters little to overall quality but matters a lot for gaining performance.

The easiest example is a racing game: the cars and the road need to be reproduced at maximum graphical detail, but it matters little if the crowd in the stands is slightly less defined - which does not mean the visuals collapse; it is a small reduction that is hard to perceive. After all, the cars move quickly and our eyes are focused elsewhere. The visual impact on the player is therefore negligible, while performance improves noticeably.
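As a rough illustration of the racing-game example, here is a hedged sketch of per-draw Variable Rate Shading through the public D3D12 API (Tier 1); the draw helpers are hypothetical and only the general idea comes from the article.

```cpp
#include <d3d12.h>

void DrawSceneWithVrs(ID3D12GraphicsCommandList5* cmdList5)
{
    // Full-rate shading for what the player is focused on (the cars and the track).
    cmdList5->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
    // DrawFocusGeometry(cmdList5);        // hypothetical helper

    // Coarser shading for "secondary" areas such as the crowd: one shader invocation
    // now covers a 2x2 block of pixels, trading a little detail for performance.
    cmdList5->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    // DrawBackgroundGeometry(cmdList5);   // hypothetical helper
}
```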


Mesh Shaders, a feature Nvidia first discussed when presenting its Turing GPUs, give developers new ways to build more complex scenes and avoid some of the bottlenecks they run into today. For example, the new approach allows memory to be read once and kept on the chip.

In the case of Nvidia's Turing architecture, mesh shaders let threads work cooperatively to generate compact meshes directly on the chip, ready for the rasteriser. It is a simpler, two-stage approach that benefits applications and games with high geometric complexity. It improves the programmability of the geometry pipeline, allowing the implementation of advanced culling techniques or the generation of completely procedural topology. Nvidia has published the Asteroids demo, which shows mesh shaders in action.
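To make the two-stage, on-chip idea concrete, here is a minimal sketch of the dispatch side in D3D12; the mesh-shader pipeline state and the meshlet count are assumed to exist already, and the HLSL shaders themselves are not shown.

```cpp
#include <d3d12.h>

void DrawMeshlets(ID3D12GraphicsCommandList6* cmdList6,
                  ID3D12PipelineState* meshShaderPso,
                  unsigned meshletCount)
{
    cmdList6->SetPipelineState(meshShaderPso);
    // One thread group per meshlet: each group cooperatively builds a compact mesh
    // (vertices + primitives) on-chip and hands it straight to the rasteriser, so
    // culled meshlets never touch the fixed-function vertex pipeline.
    cmdList6->DispatchMesh(meshletCount, 1, 1);
}
```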


Finally, Sampler Feedback. This is a technology that lets developers capture and record information about texture sampling and sample positions directly in hardware. That makes it possible to load into memory only the portions of a texture that the GPU needs for a scene, at the right time. Because it avoids the waste of loading texture data that is never needed, it makes the best use of the available physical memory and storage. This is particularly useful when rendering at high resolutions such as 4K, where high-quality textures require large memory pools.
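A conceptual sketch of the streaming side follows. The FeedbackMap layout and the requestLoad callback are hypothetical; only the core idea - stream in just the texture regions the GPU actually sampled - comes from the article, and the D3D12 steps that produce and resolve the feedback resource are omitted.

```cpp
#include <cstdint>
#include <vector>

struct FeedbackMap {              // one entry per feedback tile of a texture
    uint32_t width, height;
    std::vector<uint8_t> minMip;  // most detailed mip requested per tile; 0xFF = not sampled
};

void StreamTextureFromFeedback(const FeedbackMap& fb,
                               void (*requestLoad)(uint32_t tileX, uint32_t tileY, uint8_t mip))
{
    for (uint32_t y = 0; y < fb.height; ++y)
        for (uint32_t x = 0; x < fb.width; ++x) {
            const uint8_t wanted = fb.minMip[y * fb.width + x];
            if (wanted != 0xFF)             // the GPU sampled this region last frame...
                requestLoad(x, y, wanted);  // ...so stream in only that tile, at that mip.
        }
}
```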


Inside Xbox Series X: the full specs

We visit Microsoft for a briefing on the impressive tech of its next flagship console.

Article by Richard Leadbetter, Technology Editor, Digital Foundry
Updated on 16 March 2020

This is it. After months of teaser trailers, blog posts and even the occasional leak, we can finally reveal firm, hard facts on Xbox Series X. We visited Microsoft's Redmond WA mothership in the first week of March, we saw the unit, handled it, played on it and even constructed it from its component parts. We've seen the NVMe expandable storage, we've had our first taste of hardware accelerated ray tracing on next-gen console and we've seen how one of Microsoft's most talented developers is looking to enhance one of the most technically impressive games available today for the new Xbox. We've had a taster of some brilliant backwards compatibility features - and yes, today we can reveal the full, official specification for the Xbox Series X console.


There's a vast amount of material to share but for now, we'll be trying to deliver the key points with the promise of much more to come. In this piece, we'll be looking in depth at the tech powering the new machine, and we'll reveal:

  • How Series X is more than twice as powerful as Xbox One X in practice;
  • The difference its hardware accelerated ray tracing will make to the look of your games;
  • How its radical approach to memory and fast storage could be a game-changer - including the amazing Quick Resume feature;
  • Microsoft's war on input lag and screen tearing;
  • And some impressive compatibility features, including automated HDR for older games!

It all starts with the three key tenets that the next generation Xbox is built upon: power, speed and compatibility. Microsoft doubtless has its own messaging to share built around these pillars, but they also serve as a solid foundation for our story too.

Just how powerful is Xbox Series X?

With power, it all begins with the Project Scarlett SoC - system on chip. The processor is fabricated on an enhanced rendition of TSMC's 7nm process, which we understand rolls up a bunch of improvements to the technology, right up to but not including the new EUV-based 7nm+. The chip itself is a 360mm2 slice of silicon (significantly smaller than we speculated) that pairs customised versions of AMD's Zen 2 CPU core with 12.155 teraflops of GPU compute power.

As expected, we're getting eight CPU cores and 16 threads, delivered via two quad-core units on the silicon, with one CPU core (or two threads) reserved for running the underlying operating system and the front-end 'shell'. Microsoft is promising a 4x improvement in both single-core and overall throughput over Xbox One X - and CPU speeds are impressive, with a peak 3.8GHz frequency. This is when SMT - or hyper-threading - is disabled. Curiously, developers can choose to run with eight physical cores at the higher clock, or all cores and threads can be enabled with a lower 3.6GHz frequency. Those frequencies are completely locked and won't adjust according to load or thermal conditions - a point Microsoft emphasised several times during our visit.


In our PC-based tests, having SMT enabled can deliver up to 30 per cent - or more - of additional performance in well-threaded applications. However, for launch titles at least, Microsoft expects developers to opt for the higher 3.8GHz mode with SMT disabled. "From a game developer's perspective, we expect a lot of them to actually stick with the eight cores because their current games are running with the distribution often set to seven cores and seven worker threads," explains Microsoft technical fellow and Xbox system architect Andrew Goossen. "And so for them to go wider, for them to go to 14 hardware threads, it means that they have the system to do it, but then, you have to have workloads that split even more effectively across them. And so we're actually finding that the vast majority of developers - talking with them about their choices for launch - the vast majority are going to go with the SMT disabled and the higher clock."

 
 
[Video: Xbox Series X Complete Specs + Ray Tracing/Gears 5/Back-Compat/Quick Resume Demo Showcase! (28:11)]
A video presentation of the Xbox Series X's specifications and features - and a look at a range of impressive demos showing the key technologies in action.

There are customisations to the CPU core - specifically for security, power and performance, and with 76MB of SRAM across the entire SoC, it's reasonable to assume that the gigantic L3 cache found in desktop Zen 2 chips has been somewhat reduced. The exact same Series X processor is used in the Project Scarlett cloud servers that'll replace the Xbox One S-based xCloud models currently being used. For this purpose, AMD built in ECC error correction for GDDR6 with no performance penalty (there is actually no such thing as ECC-compatible GDDR6, so AMD and Microsoft are rolling their own solution), while virtualisation features are also included. And this leads us on to our first mic-drop moment: the Series X processor is actually capable of running four Xbox One S game sessions simultaneously on the same chip, and contains a new internal video encoder that is six times as fast as the more latent, external encoder used on current xCloud servers.


But up until now at least, the focus has been on the GPU, where Microsoft has delivered 12 teraflops of compute performance via 3328 shaders allocated to 52 compute units (from 56 in total on silicon, four disabled to increase production yield) running at a sustained, locked 1825MHz. Once again, Microsoft stresses the point that frequencies are consistent on all machines, in all environments. There are no boost clocks with Xbox Series X.
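As a quick sanity check (our own arithmetic, not from the article), the figure falls straight out of the CU count and clock, assuming the usual 64 FP32 ALUs per RDNA compute unit and two operations per clock for fused multiply-add:

```cpp
#include <cstdio>

int main() {
    // 52 CUs x 64 FP32 ALUs = 3328 shaders; an FMA counts as two operations per clock.
    const double shaders = 52 * 64;
    const double ghz     = 1.825;
    const double tflops  = shaders * 2.0 * ghz / 1000.0;
    std::printf("Series X GPU: %.2f TFLOPs\n", tflops);  // prints ~12.15
    return 0;
}
```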

"12 TFLOPs was our goal from the very beginning. We wanted a minimum doubling of performance over Xbox One X to support our 4K60 and 120 targets. And we wanted that doubling to apply uniformly to all games," explains Andrew Goossen. "To achieve this, we set a target of 2x the raw TFLOPs of performance knowing that architectural improvements would make the typical effective performance much higher than 2x. We set our goal as a doubling of raw TFLOPs of performance before architectural improvements were even considered - for a few reasons. Principally, it defined an audacious target for power consumption and so defined our whole system architecture.

"But also, in the early stages of design, it's difficult for us to accurately predict the uplift from architectural improvements across our worst cases. Our bar was a doubling in all cases, not just an average. So the most practical engineering way to ensure baseline 2x improvement across the worst cases logged in all games was to set a goal of twice the raw TFLOPs performance. So then we concentrated our efforts on making the effective performance even higher with architectural improvements and new features."

We've got a separate piece covering the basics of the Series X's form factor - but right now, here's the console in its horizontal configuration.
                   | Xbox Series X | Xbox One X | Xbox One S
CPU                | 8x Zen 2 Cores at 3.8GHz (3.6GHz with SMT) | 8x Custom Jaguar Cores at 2.13GHz | 8x Custom Jaguar Cores at 1.75GHz
GPU                | 12 TFLOPs, 52 CUs at 1.825GHz, Custom RDNA 2 | 6 TFLOPs, 40 CUs at 1.172GHz, Custom GCN + Polaris Features | 1.4 TFLOPs, 12 CUs at 914MHz, Custom GCN
Die Size           | 360.45mm2 | 366.94mm2 | 227.1mm2
Process            | TSMC 7nm Enhanced | TSMC 16nmFF+ | TSMC 16nmFF
Memory             | 16GB GDDR6 | 12GB GDDR5 | 8GB DDR3, 32MB ESRAM
Memory Bandwidth   | 10GB at 560GB/s, 6GB at 336GB/s | 326GB/s | 68GB/s, ESRAM at 219GB/s
Internal Storage   | 1TB Custom NVMe SSD | 1TB HDD | 1TB HDD
IO Throughput      | 2.4GB/s (Raw), 4.8GB/s (Compressed) | 120MB/s | 120MB/s
Expandable Storage | 1TB Expansion Card | - | -
External Storage   | USB 3.2 HDD Support | USB 3.2 HDD Support | USB 3.2 HDD Support
Optical Drive      | 4K UHD Blu-ray Drive | 4K UHD Blu-ray Drive | 4K UHD Blu-ray Drive
Performance Target | 4K at 60fps, up to 120fps | 4K at 30fps, up to 60fps | 1080p at 30fps, up to 60fps

We've demonstrated across the months that AMD's RDNA architecture offers substantially more 'performance for your teraflop', owing to the radical new design in combination with much higher clocks (the Series X GPU runs with a 56 per cent frequency advantage up against Xbox One X), but there are multipliers that should come into effect through the use of new features baked into the design such as variable rate shading, which basically attempts to increase and decrease rendering precision based on visibility.

However, even basic ports which barely use any of the Series X's new features are delivering impressive results. The Coalition's Mike Rayner and Colin Penty showed us a Series X conversion of Gears 5, produced in just two weeks. The developers worked with Epic Games in getting UE4 operating on Series X, then simply upped all of the internal quality presets to the equivalent of PC's ultra, adding improved contact shadows and UE4's brand-new (software-based) ray traced screen-space global illumination. On top of that, Gears 5's cutscenes - running at 30fps on Xbox One X - were upped to a flawless 60fps. We'll be covering more on this soon, but there was one startling takeaway - we were shown benchmark results that, on this two-week-old, unoptimised port, already deliver very, very similar performance to an RTX 2080.

"I think relative to where we're at and just looking at our experience with the hardware with this particular game, I think we're really positive to kind of see how this thing is performing, especially knowing how much untapped performance is still there in the box based on the work we've done so far," enthuses Coalition tech director Mike Rayner. "Gears 5 will be optimised, so the work that you've seen today will be there, available at launch on Xbox Series X. The title will support Smart Delivery, so if you already have the title in whatever form you'll be able to get it on Series X for free."

It was an impressive showing for a game that hasn't even begun to access the next generation features of the new GPU. Right now, it's difficult to accurately quantify the kind of improvement to visual quality and performance we'll see over time, because while there are obvious parallels to current-gen machines, the mixture of new hardware and new APIs allows for very different workloads to run on the GPU. Machine learning is a feature we've discussed in the past, most notably with Nvidia's Turing architecture and the firm's DLSS AI upscaling. The RDNA 2 architecture used in Series X does not have tensor core equivalents, but Microsoft and AMD have come up with a novel, efficient solution based on the standard shader cores. With over 12 teraflops of FP32 compute, RDNA 2 also allows for double that with FP16 (yes, rapid-packed math is back). However, machine learning workloads often use much lower precision than that, so the RDNA 2 shaders were adapted still further.

Manna from heaven for silicon fans: a CG visualisation of how the various components within the Series X SoC are positioned within the chip.

"We knew that many inference algorithms need only 8-bit and 4-bit integer positions for weights and the math operations involving those weights comprise the bulk of the performance overhead for those algorithms," says Andrew Goossen. "So we added special hardware support for this specific scenario. The result is that Series X offers 49 TOPS for 8-bit integer operations and 97 TOPS for 4-bit integer operations. Note that the weights are integers, so those are TOPS and not TFLOPs. The net result is that Series X offers unparalleled intelligence for machine learning."

Other forward-looking features also make the cut. Again, similar to Nvidia's existing Turing architecture, mesh shaders are incorporated into RDNA 2, allowing for a potentially explosive improvement in geometric detail.

"As GPUs have gotten wider and computing performance has increased, geometry processing has become more and more bound on the fixed function vertex issue triangle setup and tessellation blocks of the GPU," reveals Goossen. "Mesh shading allows developers to completely bypass those fixed function bottlenecks by providing an optional alternative to the existing parts of the GPU pipeline. In addition to performance, mesh shading offers developers flexibility and memory savings. Mesh shading will allow game developers to increase detail in the shapes and animations of objects and render more complex scenes with no sacrifice to frame-rate."

There is more. Much more. For example, the Series X GPU allows for work to be shared between shaders without involvement from the CPU, saving a large amount of work for the Zen 2 cores, with data remaining on the GPU. However, the big innovation is clearly the addition of hardware accelerated ray tracing. This is hugely exciting and at Digital Foundry, we've been tracking the evolution of this new technology via the DXR and Vulkan-powered games we've seen running on Nvidia's RTX cards and the console implementation of RT is more ambitious than we believed possible.


The ray tracing difference

RDNA 2 fully supports the latest DXR Tier 1.1 standard, and similar to the Turing RT core, it accelerates the creation of the so-called BVH structures required to accurately map ray traversal and intersections, tested against geometry. In short, in the same way that light 'bounces' in the real world, the hardware acceleration for ray tracing maps traversal and intersection of light at a rate of up to 380 billion intersections per second.

"Without hardware acceleration, this work could have been done in the shaders, but would have consumed over 13 TFLOPs alone," says Andrew Goossen. "For the Series X, this work is offloaded onto dedicated hardware and the shader can continue to run in parallel with full performance. In other words, Series X can effectively tap the equivalent of well over 25 TFLOPs of performance while ray tracing."

It is important to put this into context, however. While workloads can operate at the same time, calculating the BVH structure is only one component of the ray tracing procedure. The standard shaders in the GPU also need to pull their weight, so elements like the lighting calculations are still run on the standard shaders, with the DXR API adding new stages to the GPU pipeline to carry out this task efficiently. So yes, RT is typically associated with a drop in performance and that carries across to the console implementation, but with the benefits of a fixed console design, we should expect to see developers optimise more aggressively and also to innovate. The good news is that Microsoft allows low-level access to the RT acceleration hardware.
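For context, this is roughly the shape of the equivalent work on the PC side of DXR - a hedged sketch, not the console's "direct to the metal" path, which isn't public. It records a bottom-level acceleration structure (BVH) build that the fixed-function RT hardware later traverses; buffer allocation and the prebuild sizing query are omitted.

```cpp
#include <d3d12.h>

// Sketch only: records a BLAS build for one non-indexed triangle mesh.
void BuildBlas(ID3D12GraphicsCommandList4* cmdList4,
               D3D12_GPU_VIRTUAL_ADDRESS vertexBuffer, UINT vertexCount, UINT vertexStride,
               D3D12_GPU_VIRTUAL_ADDRESS scratch, D3D12_GPU_VIRTUAL_ADDRESS result)
{
    D3D12_RAYTRACING_GEOMETRY_DESC geom = {};
    geom.Type  = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
    geom.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;
    geom.Triangles.VertexBuffer.StartAddress  = vertexBuffer;
    geom.Triangles.VertexBuffer.StrideInBytes = vertexStride;
    geom.Triangles.VertexCount  = vertexCount;
    geom.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
    build.Inputs.Type        = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    build.Inputs.Flags       = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PREFER_FAST_TRACE;
    build.Inputs.DescsLayout = D3D12_ELEMENTS_LAYOUT_ARRAY;
    build.Inputs.NumDescs    = 1;
    build.Inputs.pGeometryDescs            = &geom;
    build.ScratchAccelerationStructureData = scratch;
    build.DestAccelerationStructureData    = result;

    // At trace time the RT hardware walks this BVH, leaving the shader cores free
    // for the lighting work that still runs on them.
    cmdList4->BuildRaytracingAccelerationStructure(&build, 0, nullptr);
}
```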


"[Series X] goes even further than the PC standard in offering more power and flexibility to developers," reveals Goossen. "In grand console tradition, we also support direct to the metal programming including support for offline BVH construction and optimisation. With these building blocks, we expect ray tracing to be an area of incredible visuals and great innovation by developers over the course of the console's lifetime."


The proof of the pudding is in the tasting, of course. During our time at the Redmond campus, Microsoft demonstrated how fully featured the console's RT features are by rolling out a very early Xbox Series X Minecraft DXR tech demo, which is based on the Minecraft RTX code we saw back at Gamescom last year and looks very similar, despite running on a very different GPU. This suggests an irony of sorts: base Nvidia code adapted and running on AMD-sourced ray tracing hardware within Series X. What's impressive about this is that it's fully path-traced. Aside from the skybox and the moon in the demo we saw, there are no rasterised elements whatsoever. The entire presentation is ray traced, demonstrating that despite the constraints of having to deliver RT in a console with a limited power and silicon budget, Xbox Series X is capable of delivering the most ambitious, most striking implementation of ray tracing - and it does so in real time.

Minecraft DXR is an ambitious statement - total ray tracing, if you like - but we should expect to see the technology used in very different ways. "We're super excited for DXR and the hardware ray tracing support," says Mike Rayner, technical director of the Coalition and Gears 5. "We have some compute-based ray tracing in Gears 5, we have ray traced shadows and the [new] screen-space global illumination is a form of ray traced screen-based GI and so, we're interested in how the ray tracing hardware can be used to take techniques like this and then move them out to utilising the DXR cores.

"I think, for us, the way that we've been thinking about it is as we look forward, we think hybrid rendering between traditional rendering techniques and then using DXR - whether for shadows or GI or adding reflections - are things that can really augment the scene and [we can] use all of that chip to get the best final visual quality."

 
 
[Video: DF Direct: Hands-On With Xbox Series X + Impressions + Xbox One X Size Comparisons! (22:54)]

In this DF Direct shot on location in Redmond WA, Rich Leadbetter and John Linneman discuss their initial reactions to Xbox Series X directly after a day of deep-dive presentations.


Efficiency in design

One of the key takeaways for me about the Series X silicon isn't just the power, but also the efficiency in design. With all of the new graphics features and the 12 teraflops of consistent compute performance, we envisaged a monstrously large, prohibitively expensive processor design - in short, a very expensive console. However, the size of the SoC at 360mm2 means we have a slice of silicon that is, in reality, much smaller than any speculative measurement we could come up with from prior teaser reveals - its 15.3 billion transistors mean that we are looking at just over twice the transistor density seen on the 16nmFF Xbox One X processor, and yet we are getting significantly more than twice the performance across the board.

However, achieving the performance, power and silicon area targets Microsoft set for itself did require some innovative thinking. Graphics power isn't just about teraflops - compute power needs to be backed up with memory bandwidth, presenting a unique challenge for a console. Microsoft's solution for the memory sub-system saw it deliver a curious 320-bit interface, with ten 14gbps GDDR6 modules on the mainboard - six 2GB and four 1GB chips. How this all splits out for the developer is fascinating.

"Memory performance is asymmetrical - it's not something we could have done with the PC," explains Andrew Goossen "10 gigabytes of physical memory [runs at] 560GB/s. We call this GPU optimal memory. Six gigabytes [runs at] 336GB/s. We call this standard memory. GPU optimal and standard offer identical performance for CPU audio and file IO. The only hardware component that sees a difference in the GPU."

In terms of how the memory is allocated, games get 13.5GB in total, which encompasses all 10GB of GPU optimal memory and 3.5GB of standard memory. This leaves 2.5GB of GDDR6 memory from the slower pool for the operating system and the front-end shell. From Microsoft's perspective, it is still a unified memory system, even if performance can vary. "In conversations with developers, it's typically easy for games to more than fill up their standard memory quota with CPU, audio data, stack data, executable data and script data, and developers like such a trade-off when it gives them more potential bandwidth," says Goossen.

A render highlighting Microsoft's custom 320-bit memory interface.

It sounds like a somewhat complex situation, especially when Microsoft itself has already delivered a more traditional, wider memory interface in Xbox One X - but the notion of working with much faster GDDR6 memory presented some challenges. "When we talked to the system team there were a lot of issues around the complexity of signal integrity and what-not," explains Goossen. "As you know, with the Xbox One X, we went with the 384[-bit interface] but at these incredible speeds - 14gbps with the GDDR6 - we've pushed as hard as we could and we felt that 320 was a good compromise in terms of achieving as high performance as we could while at the same time building the system that would actually work and we could actually ship."

The power tenet is well taken care of, then, but it's not just about the raw compute performance - the feature set is crucial too. Way back in 2016, a year before work completed on Xbox One X, the Xbox silicon team was already working on Series X, beginning the architectural work on the next generation features that we'll finally see hitting the market at holiday 2020 - a keen reminder of how long it takes for new technology to be developed. Even back then, ray tracing was on the agenda - and a revolutionary approach to storage was also required, all of which brings us to the second tenet of the Series X hardware design: a fundamental shift away from mechanical hard drives, embracing solid-state storage instead.

Why fast storage changes everything

The specs on this page represent only the tiniest fraction of the potential of the storage solution Microsoft has engineered for the next generation. In last year's Project Scarlett E3 teaser, Jason Ronald - partner director of project management at Xbox - described how the SSD could be used as 'virtual memory', a teaser of sorts that only begins to hint at the functionality Microsoft has built into its system.

On the hardware level, the custom NVMe drive is very, very different to any other kind of SSD you've seen before. It's shorter, for starters, presenting more like a memory card of old. It's also rather heavy, likely down to the solid metal construction that acts as a heat sink for silicon that consumes 3.8 watts of power. Many PC SSDs 'fade' in performance terms as they heat up - and similar to the CPU and GPU clocks, this simply wasn't acceptable to Microsoft, who believe that consistent performance across the board is a must for the design of their consoles.


The form factor is cute, the 2.4GB/s of guaranteed throughput is impressive, but it's the software APIs and custom hardware built into the SoC that deliver what Microsoft believes to be a revolution - a new way of using storage to augment memory (an area where no platform holder will be able to deliver a more traditional generational leap). The idea, in basic terms at least, is pretty straightforward - the game package that sits on storage essentially becomes extended memory, allowing 100GB of game assets stored on the SSD to be instantly accessible by the developer. It's a system that Microsoft calls the Velocity Architecture and the SSD itself is just one part of the system.

"Our second component is a high-speed hardware decompression block that can deliver over 6GB/s," reveals Andrew Goossen. "This is a dedicated silicon block that offloads decompression work from the CPU and is matched to the SSD so that decompression is never a bottleneck. The decompression hardware supports Zlib for general data and a new compression [system] called BCPack that is tailored to the GPU textures that typically comprise the vast majority of a game's package size."

PCI Express 4.0 connections hook up both internal and optional external SSDs directly to the processor.

The final component in the triumvirate is an extension to DirectX - DirectStorage - a necessary upgrade bearing in mind that existing file I/O protocols are knocking on 30 years old, and in their current form would require two Zen CPU cores simply to cover the overhead, which DirectStorage reduces to just one tenth of a single core.

"Plus it has other benefits," enthuses Andrew Goossen. "It's less latent and it saves a ton of CPU. With the best competitive solution, we found doing decompression software to match the SSD rate would have consumed three Zen 2 CPU cores. When you add in the IO CPU overhead, that's another two cores. So the resulting workload would have completely consumed five Zen 2 CPU cores when now it only takes a tenth of a CPU core. So in other words, to equal the performance of a Series X at its full IO rate, you would need to build a PC with 13 Zen 2 cores. That's seven cores dedicated for the game: one for Windows and shell and five for the IO and decompression overhead."


Asset streaming is taken to the next level, but Microsoft wasn't finished there. Last-gen, we enjoyed a 16x increase in system memory, but this time it's a mere 2x - or just 50 per cent extra if we consider Xbox One X as the baseline. In addition to drawing more heavily upon storage to make up the shortfall, Microsoft began a process of optimising how memory is actually used, with some startling improvements.

"We observed that typically, only a small percentage of memory loaded by games was ever accessed," reveals Goossen. "This wastage comes principally from the textures. Textures are universally the biggest consumers of memory for games. However, only a fraction of the memory for each texture is typically accessed by the GPU during the scene. For example, the largest mip of a 4K texture is eight megabytes and often more, but typically only a small portion of that mip is visible in the scene and so only that small portion really needs to be read by the GPU."

Microsoft has partnered with Seagate for its proprietary external 1TB SSD expansion. It's very short, quite weighty for its dimensions and actually presents rather like a memory card.

As textures have ballooned in size to match 4K displays, efficiency in memory utilisation has got progressively worse - something Microsoft was able to confirm by building special monitoring hardware into Xbox One X's Scorpio Engine SoC. "From this, we found a game typically accessed at best only one-half to one-third of their allocated pages over long windows of time," says Goossen. "So if a game never had to load pages that are ultimately never actually used, that means a 2-3x multiplier on the effective amount of physical memory, and a 2-3x multiplier on our effective IO performance."


A technique called Sampler Feedback Streaming - SFS - was built to more closely marry the memory demands of the GPU, intelligently loading in the texture mip data that's actually required with the guarantee of a lower quality mip available if the higher quality version isn't readily available, stopping GPU stalls and frame-time spikes. Bespoke hardware within the GPU is available to smooth the transition between mips, on the off-chance that the higher quality texture arrives a frame or two later. Microsoft considers these aspects of the Velocity Architecture to be a genuine game-changer, adding a multiplier to how physical memory is utilised.

The Velocity Architecture also facilitates another feature that sounds impressive on paper but is even more remarkable when you actually see it play out on the actual console. Quick Resume effectively allows users to cycle between saved game states, with just a few seconds' loading - you can see it in action in the video above. When you leave a game, system RAM is cached off to SSD and when you access another title, its cache is then restored. From the perspective of the game itself, it has no real idea what is happening in the background - it simply thinks that the user has pressed the guide button and the game can resume as per normal.

We saw Xbox Series X hardware cycling between Forza Motorsport 7 running in 4K60 Xbox One X mode, State of Decay 2, Hellblade and The Cave (an Xbox 360 title). Switching between Xbox One X games running on Series X, there was around 6.5 seconds of delay from game to game - which is pretty impressive. Microsoft wasn't sharing the actual size of the SSD cache used for Quick Resume, but says that the feature supports a minimum of three Series X games. Bearing in mind the 13.5GB available to titles, that's a notional maximum of around 40GB of SSD space, but assuming that the Velocity Architecture has hardware compression features as well as decompression, the actual footprint may be smaller. Regardless, titles that use less memory - like the games we saw demonstrated - should have a lower footprint, allowing more to be cached.

The war on input lag and screen tearing

Microsoft's speed tenet for Series X also factors in a radical revamp of input processing, designed to shave off latency on every conceivable part of the game's pipeline - meaning that the time taken between button press and resulting reaction on-screen should reduce significantly. Microsoft has already mentioned Dynamic Latency Input, but only now reveals just how extensive its work is here. It starts with the controller, where the typical 8ms latency on analogue controller input is now reduced significantly by transmitting the most up-to-date inputs just before the game needs them. Digital inputs like button presses are time-stamped and sent to the game, reducing latency without the need to increase the polling rate, while USB-connected pads see digital inputs transmitted immediately to the console. To facilitate all of this, the entire input software stack was rewritten, which delivered further latency improvements.


Latency has been a crucial, but invisible variable for developers to contend with and as game engines grow more complex and more parallel, it's not easy to keep track of additional lag - something else Microsoft attempts to resolve with DLI. "We made it easier for game developers to optimise in-game latency. Games on Xbox output an identifier for every frame as it flows through its engine," explains Andrew Goossen. "When it queries controller input, it associates that frame identifier with the timing of the input and when it completes rendering for that frame, it passes that identifier along with the completed front buffer information to the system. So with this mechanism, this system can now determine the complete in-game latency for every frame."

Microsoft says it's delivered a system that allows developers to accurately track input lag across the engine just as easily as game-makers can track frame-rate - the metric has been added to its in-house performance analysis tool, Pix. The final element of DLI is Xbox Series X's support for the new wave of 120Hz HDMI 2.1 displays hitting the market now. The firm has already begun testing this feature at lower-than-4K output resolutions on supported HDMI 2.0 screens via Xbox One S and Xbox One X. Because the screens are updating twice as quickly as their 60Hz equivalents, users should see a faster response - a state of affairs that should also apply to variable refresh rate (VRR) modes. Microsoft has also pioneered ALLM modes in its existing machines, meaning that the console can command the display to shift automatically into game mode.

The Xbox One pad evolves - smaller, more accessible to people with smaller hands and now featuring a revamped d-pad and share button.

Microsoft has also made innovations that may see the end of screen-tearing. Typically, displaying a new frame during scan-out is used to cut latency, at the cost of tearing. Triple-buffering can even out drops to frame-rate, but can add extra lag - and Series X sees this situation evolve. "We redesigned the presentation API the games use to send their completed frames to the TV," shares Andrew Goossen. "We completely decoupled the traditional link between double- or triple-buffering and latency. It used to be that triple buffering was good to improve frame-rate when the game couldn't maintain its target frame-rate, but triple buffering was bad because it increased latency. But no longer. Now frame buffering and latency are fully decoupled: games can enable triple-buffering while separately specifying their desired latency. So that latency between the CPU frame start time and the GPU frame start time can now be specified in microseconds, rather than v-syncs.


"So, game developers can precisely dial down the latency between the CPU and the GPU until just before bubbles start to form or the GPU might idle because the CPU isn't feeding it fast enough - and the runtime provides extensive latency feedback statistics for the game to inform the dynamic adjustment. So using this mechanism, the games can very precisely reduce the in-game latency as much as possible - and quite easily as well."

While enhancements and optimisations - not to mention a new share button - are added to the Xbox Series X controller, the good news is that the DLI technology is compatible with existing pads, which should be upgraded with a simple firmware update.

How older games will play better on Series X

The last of Microsoft's three tenets that form the foundation of its next-gen endeavours is compatibility, an area where the firm has delivered remarkable levels of fan service since Xbox 360 backwards compatibility was first revealed to an incredulous audience at E3 2015. The firm has already announced that its existing library of back-compat Xbox 360 and OG Xbox games will run on Series X, while all existing peripherals will also work as they should (which, in part, explains why type-A USB is used on the system as opposed to the new USB-C standard). So yes, the steering wheel tax is over.

Beyond that, the Xbox back-compat team have been hard at work since drawing the line under their Xbox 360 and X-enhanced program a while back. It likely comes as no surprise to discover that Series X can technically run the entire Xbox One catalogue, but this time it's done with no emulation layer - it's baked in at the hardware level. Games also benefit from the full CPU and GPU clocks of Series X (Xbox One X effectively delivered 50 per cent of its overall graphics power for back-compat), meaning that the more lacklustre of those performance modes added to many Xbox One X games should hopefully lock to a silky smooth 60fps.

However, the compatibility team is renowned for pushing the envelope and some of the early work we saw with Series X is mouthwatering. Microsoft has already promised improved image fidelity, steadier frame-rate and faster loading times, but the early demos we saw look even more promising - and it is indeed the case that hints dropped in Phil Spencer's recent Series X blog post will result in selected Xbox One S titles running at higher resolutions on the new console. In fact, we saw Gears of War Ultimate Edition operating with a 2x resolution scale on both axes, taking a 1080p game all the way up to native 4K. It's an evolution of the Heutchy Method used to bring Xbox 360 720p titles up to full 4K, with often spectacular results. Crucially, the back-compat team does all the heavy lifting at the system level - game developers do not need to participate at all in the process.


"We are exploring ways of improving, maybe, a curated list of games," says Peggy Lo, compatibility program lead for Xbox. "Things that we are are looking at include improving resolution for games, improving frame-rates - maybe doubling them! And the way we're doing it is really exploring multiple methods. So we knew what we were doing with the Heutchy Method, maybe we'll change it a bit, there's a there's a few other methods that we're exploring.

"What we're probably not going to do is explain all those methods today because we're still in the process of figuring out what exact method will be best for the Series X but I want you to feel confident that we have a solution that we can fall back on or that we will always keep pushing forward to."

Microsoft set up two LG OLED displays, one running Gears Ultimate at its standard 1080p on Xbox One X (the game never received an X upgrade) and the other at native 4K on Series X. On-screen debug data revealed the number of render targets the console was running at a higher resolution, along with the resolution scaling factor and the new native resolution - in this case, a scale of 2.0 and a 3840x2160 pixel count. The notion of displaying such a precise scaling factor made me wonder if it could actually go higher - whether 720p or 900p titles could also scale to native 4K. It's a question that went unanswered, though Lo chuckled when I asked.

Further goodies were to come - and owners of HDR screens are going to love the second key feature I saw. We got to see the Xbox One X enhanced version of Halo 5 operating with a very convincing HDR implementation, even though 343 Industries never shipped the game with HDR support. Microsoft ATG principal software engineer Claude Marais showed us how a machine learning algorithm using Gears 5's state-of-the-art HDR implementation is able to infer a full HDR implementation from SDR content on any back-compat title. It's not fake HDR either, Marais rolled out a heatmap mode showing peak brightness for every on-screen element, clearly demonstrating that highlights were well beyond the SDR range.

A heatmap of Halo 5 luminance, comparing the standard presentation with the machine learning-based auto HDR presentation. Note that many brighter elements map into HDR space on the right.

"It can be applied to all games theoretically, technically, I guess we're still working through user experiences and things like that but this is a technical demo," revealed Marais. "So this [Halo 5] is four years old, right, so let's go to the extreme and jump to a game that is 19, 20 years old right now - and that is Fusion Frenzy. Back then there's nothing known about HDR, no-one knew about HDR things. Games just used 8-bit back buffers."

This was a show-stopping moment. It was indeed Fuzion Frenzy - an original Xbox title - running with its usual 16x resolution multiplier via back-compat, but this time presented with highly convincing, perceptibly real HDR. The key point is that this is proposed as a system-level feature for Xbox Series X, which should apply to all compatible games that don't have their own bespoke HDR modes - and as Marais demonstrated, it extends across the entire Xbox library.

"But you can think of other things that we could do," Marias adds. "Let's look at accessibility. If you have people that cannot read well or see well, you probably want to enhance contrast when there's a lot of text on-screen. We can easily do that. We talked to someone that's colourblind this morning and that's a great example. We just switch on the LUT and we can change colours for them to more easily experience the announcement there."

It's clear that there's a lot of love for the Xbox library and that the back-compat team are hugely excited about what they do. "Hopefully you realise that we are still quite passionate about this," says Peggy Lo. "It's a very personal project for a lot of us and we are committed to keep doing this and making all your games look best on Series X."

Power, speed, compatibility. Microsoft made a convincing pitch for all of the foundational pillars of Series X - and remarkably, there's still more to share. After the initial presentations, we headed over to Building 37 on the Microsoft campus, where principal designer Chris Kujawski and his colleagues gave us a hands-on look at the Series X hardware, a detailed breakdown of its interior components and everything we could possibly want to know about its innovative form-factor, along with the subtle, but effective refinements made to the Xbox controller. The bottom line? There is still so much to share about Xbox Series X and we're looking forward to revealing more.

Digital Foundry was invited to Microsoft in Redmond WA during early March to cover the Xbox Series X specs reveal. Microsoft paid for travel and accommodation.



PROJECT XCLOUD: MICROSOFT IS MOVING ITS BLADE SERVERS TO XBOX SERIES X

 
 
 
By Riccardo Arioli Ruelli — 21 March 2020, 19:53

A report published by The Verge has confirmed that Microsoft is continuing to beef up its Project xCloud blade servers. According to the report, each rack now holds eight Xbox One S consoles, compared with the four of the initial configuration.

The Redmond giant does not seem inclined to stop there: the same report describes how Microsoft is working to move all of its xCloud servers to Xbox Series X, bringing significant performance improvements - chief among them a new video encoder that is up to six times faster than the current one. The next-gen console's processor would in fact be able to run four Xbox One S game sessions simultaneously on a single chip. In short, Microsoft is preparing to move beyond the initial test phase and take its streaming project to a more advanced level.

The company has also begun testing xCloud on Windows 10 PCs with an app much like the one seen on iOS and Android; it currently runs at a resolution capped at 720p, although according to the company 1080p looks like a very close target. Before we go, we remind you that a special feature on Xbox Series X is available on our pages.

SOURCE: WCCFTECH
 


Leak: Xbox Lockhart Benchmarks at ‘Series S’ Performance


Could be up to 7.9 teraflops of power.


Multiple credible reports (Windows Central, Kotaku, The Verge) have all but confirmed the existence of the entry-level model known as Xbox Lockhart. We now have more evidence in the form of a possible benchmark score.

Leaker _rogame spotted what might be Lockhart's accelerated processing unit (APU), consisting of a 4.0GHz octa-core processor with 16GB of GDDRX memory (12GB RAM, 4GB of VRAM) and an unknown GPU. Based on the leaked benchmark results, German tech site WinFuture speculates that the APU delivers between 7.0 and 7.9 teraflops - much higher than the roughly 4 teraflops other outlets have reported.

This unknown APU scored 7,100 points in the 3DMark Time Spy test, which puts it at around the level of a Ryzen 9 3900X CPU paired with an Nvidia GeForce GTX 1660 SUPER GPU (via Notebookcheck). Since we know Microsoft will use AMD GPUs in its next-gen consoles, this score is roughly what the Radeon RX 5600 XT achieves.

All of this is speculation, as Microsoft hasn't acknowledged the existence of Lockhart - but leaker _rogame refers to it as the Xbox Series S.

Do you buy that?


No, the PS5 won’t offer anywhere near the graphics performance of Xbox Series X: Navi benchmarks prove it

The PS5 will almost certainly face a performance deficit against the Series X (Image source: Playstation UK)
Sony’s claim that high clockspeeds offset its meagre shader allocation on the PlayStation 5 doesn’t hold water when Navi overclocking results are factored in. Due to non-linear performance/clock scaling, the likely performance deficit between the two consoles is in the 25-30 percent range, which may have a major impact on 4K performance.
by Arjun Krishna Lal, 2020/03/20
 
 

During the Playstation 5 spec unveiling, Sony made much of the fact that the PS5's GPU was more "agile" than the competition. The logic they offered was that, because GPU clockspeeds are tied to more than just the shader cores, higher clocks mean higher throughput across the chip, which can offset the lack of hardware shader resources.

This was an arrow shot right across Microsoft's bow: Redmond, days earlier, had revealed the Xbox Series X's immensely powerful 12 TFLOP GPU, the fastest GPU AMD has ever made. The 52 CU part features 3328 shaders and runs at 1800 MHz, right in line with what we've seen in other Navi parts like the RX 5700. It's backed by GDDR6 memory that delivers 561 GB/s of bandwidth. Sony, in contrast, unveiled a much more conservative GPU for the Playstation 5, with 36 CUs (the same number of shaders as the RX 5700) tied to 448 GB/s memory.

Sony’s part, however, operates at a much higher 2.23 GHz max clock speed, which allowed the company to claim that it delivers over 10 TFLOP of compute, just fifteen percent behind the Xbox Series X. Reading between the lines, Sony’s claim that high clock speeds matter more than raw hardware resources implies that the performance gap between the PS5 and Xbox Series X might be even narrower than 15 percent. 
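Here is the arithmetic behind those headline figures (our own working, assuming 64 FP32 ALUs per CU and two operations per clock), showing how the roughly 15 per cent gap at peak clocks falls out:

```cpp
#include <cstdio>

int main() {
    const double ps5 = 36 * 64 * 2 * 2.23  / 1000.0;   // 36 CUs at 2.23 GHz  -> ~10.28 TFLOPs
    const double xsx = 52 * 64 * 2 * 1.825 / 1000.0;   // 52 CUs at 1.825 GHz -> ~12.15 TFLOPs
    std::printf("PS5 %.2f TF vs Series X %.2f TF: %.0f%% behind at peak clock\n",
                ps5, xsx, (1.0 - ps5 / xsx) * 100.0);   // ~15%
    return 0;
}
```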

This is misleading for two key reasons. For starters, the PS5 only delivers 10 TFLOPs of notional compute power when it’s running at its maximum boost frequency. Sony themselves have asserted that clockspeeds will be pulled back depending on power draw, meaning that there will be scenarios where the PS5 delivers less. The Xbox Series X, in contrast, runs at a rock solid 1800 MHz, subject neither to thermals nor power draw.

The second reason has to do with what we already know about Navi clockspeed scaling. RDNA parts do not scale well at higher clockspeeds. Overclocking tests on the RX 5700 XT - a close analogue of the PS5's GPU - indicate that a massive 18 percent overclock from stock up to 2.1 GHz resulted in just a 5-7 percent improvement in frame rates. This is the exact opposite of Sony's claim, which implies better-than-linear performance scaling with clockspeeds. RDNA2 is an iterative update to the first-gen RDNA architecture found in Navi 10 parts. This makes it very likely that the PS5 will also behave similarly: upping the clocks to 2.2 GHz won't magically offset the substantial difference in hardware allocation between the Series X and the Playstation 5 GPUs.

This leads to the sobering conclusion that in real-world workloads, the PS5 might be 30 percent or more slower than the Xbox Series X. We don’t expect the world’s fastest SSD or individual raindrop audio rendering to offset that.


 



From the web

 

PlayStation 5: graphics performance below Xbox Series X, according to Notebook Check

Navi chip benchmarks would appear to prove it

 

 

 

Secondo loro dunque, a causa delle performance non lineari e del clock scaling è più verosimile che il divario prestazionale tra la console next-gen di Sony e la concorrente Xbox Series X si attesterà attorno a un 25-30% a sfavore della prima e non del 10-15%, divario che potrebbe aumentare in esecuzioni a 4K.

Nel corso della presentazione della console il colosso giapponese ha sottolineato la maggiore “agilità” della sua CPU come una sorta di punto di forza. La logica da loro offerta vorrebbe che, grazie al fatto che la velocità di clock del processore è legata a dei soli shader cores, un clock più elevato garantirebbe una maggiore portata dei chip, capace di colmare il divario delle differenti risorse shader dell’hardware.

 

Microsoft, dal canto suo, giorni prima aveva rivelato l’immensa potenzialità della propria GPU, capace di generare ben 12 TFLOPs (12.155 ndr), offrendo la più performante scheda grafica mai prodotta da AMD. Dotata di 52 unità computazionali, con 3328 shader che corrono a 1800 MHz, la scheda risulta in linea con quanto visto in altri chip d’architettura Navi, come l’RX 5700. La memoria, GDDR6, offre una larghezza di banda di 561 GB/s.

 

Sony, invece, ha svelato una GPU molto più conservativa, con 36 unità computazionali e una larghezza di banda di 448 GBs. Il vantaggio sta nella maggiore velocità di clock, pari a 2.23 GHz, dettaglio che ha fatto dichiarare alla compagnia che sarebbe capace di erogare oltre 10.28 TFLOPs, attestandosi dietro la GPU concorrente di solo il 15%.

 

Leggendo tra le righe, Sony è convinta che una frequenza di clock superiore sia più importante delle pure risorse hardware e che in luce di ciò il divario tra le due console potrebbe essere addirittura inferiore al 15%.

 

Questo però è fuorviante per due motivi. In primo luogo, PlayStation 5 erogherà 10.28 TFLOPs di potenza solo quando funzionerà alla massima frequenza di boost. La stessa Sony ha ammesso che la velocità di clock sarà ridotta a seconda della richiesta energetica, il che significa che ci saranno scenari in cui la potenza erogata sarà inferiore. Xbox Series X, invece, offrirà una solida frequenza a 1800 MHz, senza oscillazioni determinate dalle esigenze termiche o energetiche.

 

La seconda ragione è insita in quanto già sappiamo dell’architettura Navi e le sue capacità di clockspeed scaling. RDNA non scala cosi bene ad alte velocità di clock. Test di overclock sulle RX 5700 XT  GPU estremamente simile per specifiche a quella montata su PlayStation 5 – indicano che un massimo overclock del 18% dalla velocità di fabbrica fino a 2.1 GHz risulta in un incremento del frame rate di soli 5-7 punti percentuali.

 

Ciò va in netto contrasto con quanto dichiarato da Sony, che implicano performance in scaling migliori con con il clockspeed. RDNA 2 è invece un aggiornamento consistente a confronto della prima generazione RDNA basata su Navi 10.

Questo rende verosimile che PlayStation 5 si comporterà analogamente: aumentarne il clock a 2.2 GHz non colmerà magicamente le sostanziali differenze nella capacità d’allocazione che corrono tra gli hardware delle due console.

 

In practice, this leads to the sobering conclusion that, under real-world workloads, PlayStation 5 will be at least 30% slower than Xbox Series X (perhaps even more). It is not plausible that its faster SSD subsystem can make up for a deficit of that size on the CPU and GPU side.

 

Of course, everything reported here is the result of Notebook Check's own detailed analysis, so all that remains is to wait a few months and see whether this reasoning is borne out in practice.


Xbox Series X and Mesh Shaders: performance similar to an RTX 2080 Ti on DX12 Ultimate

Xbox Series X supports Mesh Shaders and, in that respect, delivers performance similar to an NVIDIA RTX 2080 Ti when using DirectX 12 Ultimate.

NEWS by Tommaso Pugliese — one hour ago
 
 
 
 

Xbox Series X supports Mesh Shaders, delivering performance similar to that of an RTX 2080 Ti when using DirectX 12 Ultimate.

The demonstration comes from Martin Fuller who, while presenting the DirectX 12 Ultimate feature set, showed a video in which the technology is used at 1440p on an NVIDIA RTX 2080 Ti and at 4K (2160p) on Xbox Series X.

Fuller's demo produces the following noteworthy results: the RTX 2080 Ti renders the scene in roughly 40 microseconds using the standard pass-through method at 1440p, while Xbox Series X renders it in roughly 100 microseconds at 4K under the same conditions.
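Since the two machines are rendering at different resolutions, the raw timings are not directly comparable; a rough, purely illustrative normalization (with the caveat that a geometry pass like this does not scale linearly with pixel count):

# Figures quoted above: RTX 2080 Ti ~40 us at 1440p, Xbox Series X ~100 us at 4K
pixels_1440p = 2560 * 1440          # ~3.7 million pixels
pixels_4k    = 3840 * 2160          # ~8.3 million pixels

print(pixels_4k / pixels_1440p)     # 2.25x the pixels...
print(100 / 40)                     # ...rendered in 2.5x the time
print(40 / (pixels_1440p / 1e6))    # ~10.9 us per megapixel on the 2080 Ti
print(100 / (pixels_4k / 1e6))      # ~12.1 us per megapixel on Series X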

You may be wondering what Mesh Shaders are and why they can make a difference in polygon rendering. In short, they are an optimization technique that greatly simplifies computing view distance and culling objects that are not visible on screen, taking that burden off the CPU.
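To make the idea concrete, here is a heavily simplified, hypothetical sketch of what that culling amounts to: geometry is split into small clusters ("meshlets"), and each cluster is tested against the view frustum and discarded on the GPU, instead of the CPU walking the scene to decide what to draw. The names below are illustrative, not part of the DirectX 12 API:

from dataclasses import dataclass

@dataclass
class Meshlet:
    center: tuple   # bounding-sphere centre of a small cluster of triangles
    radius: float   # bounding-sphere radius

def signed_distance(plane, point):
    # plane = (nx, ny, nz, d) with a unit normal pointing into the frustum
    nx, ny, nz, d = plane
    x, y, z = point
    return nx * x + ny * y + nz * z + d

def visible_meshlets(meshlets, frustum_planes):
    # With mesh shaders this per-cluster test runs on the GPU, so whole meshlets
    # outside the view are thrown away before any triangle is shaded.
    return [m for m in meshlets
            if all(signed_distance(p, m.center) >= -m.radius for p in frustum_planes)]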

The practical results are clear: using Mesh Shaders, much like Variable Rate Shading, makes it possible to produce visually far richer scenes while spending substantially fewer resources than actually processing every perceived object.

Martin Fuller's presentation, embedded below, starts at around the 23:00 mark.

 

 


Xbox Series X: the power gap with PS5 is staggering, according to a former Guerrilla developer

by Gianluca Saitto | Monday 23 March 2020, 9:46


With the publication of PlayStation 5's technical specifications, many developers have wanted to have their say on the technical architecture of Sony's next-generation console. Many of them are enthusiastic, but according to Chris Grannell, a former game designer at Guerrilla Games, the real powerhouse is Xbox Series X. In the former Guerrilla developer's view, Microsoft's new console is considerably more powerful than PS5.

Grannell's statements came via his Twitter profile where, replying to a user, he said he had spoken with several developers recently. They confirmed to him that "the power difference between the two consoles is quite staggering", in Xbox Series X's favour. Grannell then added that this "doesn't mean there won't be great games on PlayStation 5".


Thank God for this guy. Ppl making up shit left and right. Surprised at the ppl too.

 

I’ve chatted to a few devs and they have confirmed the power difference is quite staggering. However they have said it doesn’t mean you can’t make good games on the PS5. These fanboys clearly don’t care about that and are massively rattled.

 
 
 
 

 

The incredible die of Xbox Series X


The former Guerrilla Games developer's tweet is meant as a reflection on the hardware difference between the two next-generation consoles, something only a handful of developers have discussed publicly so far. Even though plenty of information about the consoles is still missing, we already know that Xbox Series X holds a significant advantage over PlayStation 5 in terms of CPU and GPU power.

 

That does not mean Sony's new console isn't powerful. The Japanese company is betting heavily on its SSD technology. However, Xbox Series X's advantages bring tangible benefits to many of the technical aspects of games, whereas these aspects of PS5 have yet to be fully demonstrated. What do you think of the former Guerrilla Games member's recent statements?


 

 

 

VR: Valve, Microsoft, and HP are collaborating on a new next-gen headset

Valve, Microsoft, and HP are working together on a new VR headset described as "next gen", meant to set the new standard for virtual reality going forward.

NEWS by Giorgio Melani — one hour ago
 
 
 
 

Valve, Microsoft, and HP are collaborating on a new virtual-reality headset billed as "next gen", that is next-generation, with more power and better optics; it is already visible on Steam with a dedicated page.

Currently called simply "Next Gen HP VR Headset", the device also has a site of its own from HP - evidently the company handling the bulk of the hardware work - although no substantial information can be gleaned from it yet.

"Sviluppato in collaborazione con Valve e Microsoft, il visore VR di nuova generazione consente delle esperienze più immedesimanti, comode e compatibili rispetto a quelli della generazione precedente", si legge nella descrizione ufficiale, "è il nuovo standard nella realtà virtuale", nientemeno.

There is also a vague image, with the headset just visible in the gloom and the HP logo on the front. In any case, the product is still in development, so it will be some time before we see it on the market.

To stay up to date you can sign up for the newsletter on the official site. Valve, of course, is heavily invested in VR, with its own Valve Index and the recent release of Half-Life: Alyx, which looks like a masterpiece judging by the first review scores collected so far.

Microsoft's direct involvement is the interesting part: the company keeps demonstrating active work on VR software and hardware, yet so far has made no real move towards bringing the technology to its consoles. Given the power of Xbox Series X, however, an opening towards VR support seems likely, and in that case this next-gen HP headset could be directly involved.

 

By Riccardo Arioli Ruelli - 25 March 2020, 19:40

A recent video by Martin Fuller for the DirectX Developer Day digs into the use of Mesh Shaders on the next generation of GPUs, offering a first interesting look at the graphics heart of Xbox Series X, based on AMD's RDNA 2.

The video analyses the uses of the new Mesh Shader technology, comparing a PC equipped with an NVIDIA GeForce RTX 2080 Ti against an Xbox Series X devkit (clearly visible on camera). The dragon shown on screen is rendered on Xbox Series X in 102 microseconds with the traditional pipeline, dropping to 53 microseconds with Mesh Shaders, all at 4K resolution. Likewise, the RTX 2080 Ti PC renders it in 72 microseconds, falling to 32 microseconds with Mesh Shaders enabled, in this case at 1440p.
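As a quick sanity check on those timings, here is the speed-up each platform gets from the new pipeline, using only the numbers reported above:

# Timings in microseconds: (traditional pipeline, with mesh shaders)
series_x  = (102, 53)   # at 4K
rtx2080ti = (72, 32)    # at 1440p

for name, (before, after) in (("Series X", series_x), ("RTX 2080 Ti", rtx2080ti)):
    print(name, round(before / after, 2))   # ~1.92x and ~2.25x faster with mesh shaders

# 4K also carries ~2.25x the pixels of 1440p, which is the caveat discussed just below.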

Bearing in mind that NVIDIA's flagship is working at a lower resolution, and considering that the algorithm behind Mesh Shaders may well be optimized further over time, the figures point to comparable compute power between the two platforms, particularly with the new rendering technique enabled. Without getting carried away, AMD's new RDNA 2 architecture may finally give rival NVIDIA a run for its money, delivering, in this specific case, a noteworthy next-gen experience.

Before leaving you to the video, we remind you that on Everyeye you will find a deep dive into Mesh Shaders, the rendering technology at the heart of Xbox Series X.

SOURCE: PC GAMER
 


Xbox Series X: Microsoft is working to support OpenGL 3.3 and OpenCL 1.2

Microsoft is working to bring OpenGL 3.3 and OpenCL 1.2 to Windows and to every DirectX 12-compatible device, such as Xbox Series X.

NEWS by Luca Forte — 5 hours ago
 
 
 
 

Microsoft is working to bring the OpenGL 3.3 and OpenCL 1.2 libraries to Windows and to every device compatible with DirectX 12, such as Xbox Series X.

The move is most likely intended to simplify porting all the open-source software and technologies that were never designed to run natively on Microsoft's operating system or on DirectX 12-compatible devices. That is why we think Xbox Series X, Microsoft's next console, is part of the project: DirectX 12 Ultimate and DXR 1.1 are, after all, the new common foundation for ray tracing on PC and Xbox.

According to Collabora, a company that specializes in promoting the benefits of open-source software, this "translation" layer is meant to make OpenCL and OpenGL support on all Windows devices more straightforward. This way, a GPU vendor only needs to implement a D3D12 driver for its hardware to be compatible with all three APIs. It will also make it much easier to bring older OpenCL and OpenGL applications over to D3D12.

That, in turn, simplifies getting applications written against these open-source APIs onto more modern devices. More information can be found at this address.
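From the application's point of view nothing should need to change: the same OpenGL code would simply end up running on top of a D3D12 driver underneath. A minimal, hypothetical sketch of the kind of program the mapping layer is meant to keep working unmodified (it uses the glfw and PyOpenGL packages, which are our choice for illustration and are not mentioned in the article):

import glfw
from OpenGL.GL import glGetString, GL_RENDERER, GL_VERSION

glfw.init()
# Request a plain OpenGL 3.3 context; whether it is backed by a native GL driver
# or by the GL-on-D3D12 layer is invisible to the application.
glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 3)
glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 3)
window = glfw.create_window(640, 480, "GL 3.3 over D3D12", None, None)
glfw.make_context_current(window)

print(glGetString(GL_RENDERER))   # would report the D3D12-backed renderer if the layer is in use
print(glGetString(GL_VERSION))

glfw.terminate()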

Compatibility with Xbox Series X is therefore not stated explicitly, but since this is a universal project covering every Windows and DirectX 12 device, Microsoft's new console should not be left out.


 


With the money shortage that will follow this pandemic, if they don't lower their prices both PS5 and Xbox will flop in sales... I'll buy the Sony console, but only once it has a decent price.


 

Xbox Series X: a hacker has stolen documents on the console's GPU and on other AMD cards

Xbox Series X also finds itself at the centre of a theft, with a hacker stealing documents about the console's GPU and other AMD cards and demanding a ransom from the company.

NEWS by Giorgio Melani — 25/03/2020
 
 
 
 

A hacker has stolen files relating to the GPU of Xbox Series X and to other upcoming AMD graphics cards, apparently a sizeable set of "test files" covering new AMD products, including the chip inside Microsoft's new console.

Confirmation comes from AMD itself in a press release, at least as regards having been "contacted by someone claiming to have taken test files relating to a number of our current and future products". The company therefore seems to confirm the theft, and acknowledges that the hacker may hold other files beyond those disclosed, but says it does not consider the matter a threat to the security or competitiveness of AMD products.

According to Engadget, the stolen material appears to include source code for Navi 10, the upcoming Navi 21, and the Arden GPU of Xbox Series X. The hacker contacted AMD demanding 100 million dollars to return the stolen documents, threatening otherwise to release the files publicly.

The events appear to date back to last December, and it is unclear why the matter is only surfacing now; AMD does not seem to have taken it particularly to heart. In any case, the statement specifies that the company is working with law enforcement and other experts to investigate the case.

The affair should have no particular consequences for future Xbox Series X buyers, who right now care about very different things, such as the price - which could come in under $500 - and the new console's possible release date.


I've got the urge to play some video games again, but with a 5-6 year old PC the most recent titles won't run. Not being a computer expert, I suppose I'd have to change the graphics card and more, and I wouldn't be able to do it myself. Considering that I'd have to spend at least 200 euros anyway, and then maybe swap components again at the next generation, wouldn't it be better to buy a PlayStation or an Xbox? It's true I might run into compatibility problems down the line, but I think the titles I play (FIFA, One Piece, racing games and a few others) will keep being released for older consoles too.

Yes, you won't have any problems for at least a couple of years... keep in mind that until not long ago they were still making FIFA and PES for the PS3...

 
 

xCloud will run on the Amazon Fire Stick


Tonight between 22:00 and midnight (EST) Microsoft will finalize the purchase of Konami's entire games catalogue. The deal they have been working on is so expensive for Microsoft that it will single-handedly fund a new production facility Konami wants to build to focus on new research and development for gaming devices. Microsoft will have to keep paying royalties to Konami on every game originally released by Konami, including future remasters. Only brand-new games, built from scratch, will be fully profitable for Microsoft. Konami will also retain the rights to create new gaming hardware based on any previously released adult-themed (M-rated) game of theirs. Under the agreement, Microsoft will be able to pick a team from the employees expected to be laid off after the sale to help found a studio in Chou City. Konami will host the current "Xbox Games Studios Tokyo" as a satellite office for 12-18 months on the second floor of its headquarters in Chou City, while Microsoft finalizes its plans to relocate or build its own studio space.

