To get a better sense of how and why FidelityFX Super Resolution 2.0 was developed, and where the technology will go from here, we spoke to the head of the FSR 2.0 project, Nicolas Thibieroz, whose full title is senior director of game engineering at AMD. We last spoke with Nicolas back in 2021 after FSR 1.0 launched, leaving us plenty of recent developments - and future plans - to talk about. This interview was conducted online and recorded on video, but a written transcript is available below. Note that both questions and answers have been edited lightly for readability. Enjoy! Let’s start by getting into the motivations behind FSR 2.0. AMD released FSR 1.0 almost a year ago as an image space scaling solution. Building upon that, what did the team want to achieve with FSR 2.0 - and why did they want FSR 2.0? Nicolas Thibieroz: That’s a great question. So let’s spend a bit of time talking about FSR 1.0, right? As you said, the tech was released almost a year ago and we’ve seen really good adoption by game developers, and clearly we’ve been super happy with the success. We’ve received a lot of praise for many aspects of the technology, such as the quality/performance trade-offs, the ease of integration, cross-platform support and of course its open nature. So it’s interesting, because we could have chosen to build a super resolution tech for higher-end GPUs only. And in a way it would have been easier, right? But it did not quite fit with what our developer partners and our customers wanted. We at AMD believe that super resolution is a tech that should be accessible to everyone, regardless of GPU vendor or performance level, and looking back, I think we made the right decision. There are more than 80 games enabled with FSR 1.0 in the first year alone, and that means players can enjoy higher performance on whatever device [they’re playing on].
So essentially that means FSR 1.0 was our first solution to democratise access to upscaling technology, but we knew we were not going to stop there. So in terms of motivation, I would say that improving quality relative to source resolution was the main aspect that we wanted to go after. FSR 1.0 looks great in its higher quality modes, but clearly it starts losing some steam at higher scale ratios, and we wanted to improve on this. Real-time upscaling is an active area of research for AMD and temporal upscaling across frames was the next natural step for us to look into. So that solution, which would allow us to use samples from previous frames to reconstruct the current frame, was the logical way to go about improving the quality. At the same time, we did not want to lose any of the values that made FSR 1.0 successful. Again: cross-platform, ease of integration, and open source. So providing a high quality upscaling solution, while retaining the pillars that game developers have come to expect from us, was the main motivation behind the development of FSR 2.0. So you chose to use temporal supersampling, essentially, for FSR 2.0 - temporal reconstruction. We’ve seen temporal techniques for a long time, dating back to Halo Reach on Xbox 360… obviously it’s improved and changed dramatically since then… but the problem of ghosting artefacts has remained. How does FSR 2.0 approach that to try and eliminate ghosting as much as possible? Nicolas Thibieroz: Yeah, good question as well. Let me just spend a bit of time on TAA because I think temporal anti-aliasing is actually a great example of how the game development industry shares and collaborates. Because we’ve seen, as you pointed out, that TAA is now pretty ubiquitous in games.
So with TAA being a building block for temporal upscaling, I think it was important for AMD to give back to the community by releasing and documenting our own efforts in this area, which we have done by releasing FSR 2.0 under the GPU Open banner. Regarding ghosting - by definition, I would say that any temporal upscaling technique is going to rely on pixel information from previous frames, leveraging what we call motion vectors to determine where current pixels used to be in the previous frame. Now unfortunately, in games pixels tend to move a lot, so you can’t just rely on motion vectors alone. If you’re playing a first person game and, say, you open a new door and there’s obviously some new stuff beyond that door, well, those pixels behind the door don’t have any history from previous frames. That’s what we call disocclusion. So in those cases, the temporal algorithm needs to find other ways to infer how to upscale those particular pixels. There are some documented techniques on how to deal with those limitations, including looking at neighbouring pixels and clamping the history against them to prevent reprojecting incorrect colours, therefore avoiding ghosting. In fact, I would say that how history and current pixels are combined is actually a major part of how well temporal algorithms work. The secret sauce, if you will. Here I would say FSR 2.0 uses a combination of existing and new methods to limit the visual impact of ghosting artefacts. We’ve published a bunch of technical details on this that I would recommend anyone read - the FSR 2.0 documentation is very good in that respect, or there’s our GDC 2022 presentation for the full spiel. But in any case, I think it’s fair to say we expect to reduce ghosting artefacts in the future. And FSR 2.0 will get better and better.
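As an editorial aside: for readers curious what the neighbourhood clamping Nicolas describes looks like in practice, here is a minimal sketch. The function name, the scalar colours and the 3x3 neighbourhood values are purely illustrative - this is the general TAA-style technique, not FSR 2.0’s actual code.

```python
def clamp_history(history, neighbourhood):
    """Clamp a reprojected history colour to the min/max of the current
    frame's local neighbourhood. If the reprojection points at stale data
    (e.g. after a disocclusion), the clamp limits how wrong - and how
    ghosted - the result can be."""
    lo = min(neighbourhood)
    hi = max(neighbourhood)
    return max(lo, min(history, hi))

# A disoccluded pixel: the history says bright (0.9), but the freshly
# revealed surface behind the door is dark, so the clamp pulls the
# history into the neighbourhood's range before it is blended in.
current_3x3 = [0.10, 0.12, 0.11, 0.09, 0.10, 0.13, 0.11, 0.10, 0.12]
print(clamp_history(0.9, current_3x3))   # clamped down to 0.13
print(clamp_history(0.11, current_3x3))  # already in range, unchanged
```

In a real upscaler this runs per colour channel (often in a perceptual colour space) rather than on a single scalar, but the principle is the same.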
We know that games are made up of many passes, and temporal anti-aliasing in the past has done well with opaque stuff, but things that are transparent, animated textures, surfaces that change in a way that is not really expected, have always caused problems for TAA. So what was AMD’s approach here, to ensure the upscaling stayed high quality even for these screen elements? Nicolas Thibieroz: So I think your question hits the nail on the head. Games are very complicated beasts, right? They are made up of lots of different types of rendering. And that means for high quality upscaling, FSR 2.0 has to be great at everything. But I think it’s fair to say that transparency in particular can be challenging with temporal upscalers. And that’s because those translucent pixels typically don’t have motion vectors or even depth information. So you have to rely purely on colour information to decide how to upscale those. Now FSR 2.0 is equipped to handle a large variety of the more difficult cases out of the box. We’ve got a detection mechanism for shading changes: it looks for variations in the image that don’t have a corresponding change in geometry, and that takes care of many of these situations. However, we are all about giving developers maximum control. And to that end, we expose a mask. This mask, which we call the reactive mask, allows developers to essentially optimise the quality of their FSR 2.0 integration by tagging transparent pixels on the screen. With that information, FSR 2.0 is able to produce a better upscaled image by balancing the historical data with the transparent colours in the current frame. In Deathloop… I found that the transparency issues, ghosting, that we normally see in games with TAA weren’t really present. We’ve only looked at two [FSR 2.0] titles so far and these things can obviously change on a per title basis.
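As a quick editorial sketch of the idea: a reactive mask effectively tells the upscaler, per pixel, how much to trust history versus the current frame. The weighting below is a plausible illustration using assumed names and numbers - not FSR 2.0’s actual blend logic, which is documented on GPU Open.

```python
def resolve_pixel(history, current, reactivity, base_history_weight=0.9):
    """Blend a reprojected history colour with the current-frame colour.
    reactivity is in [0, 1]: 0 means a stable opaque surface (trust
    history), 1 means a fully 'reactive' pixel such as transparency or
    particles (discount history and favour the current frame)."""
    w = base_history_weight * (1.0 - reactivity)
    return w * history + (1.0 - w) * current

stable = resolve_pixel(0.5, 0.6, reactivity=0.0)  # leans on history
sparks = resolve_pixel(0.5, 0.6, reactivity=1.0)  # current frame only
print(stable, sparks)
```

With full reactivity the history contribution drops to zero, which is exactly why tagging transparent pixels reduces ghosting on them.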
We looked at what is technically considered a non-standard title, because officially FSR 2.0 through GPU Open supports Vulkan and DX12. Legacy API support [such as DX11 and OpenGL] is really not there unless [the developer] works directly with [AMD]… and that’s what happened with God of War. We noticed there that when pixels were disoccluded, for a couple of frames afterwards in that area there would be the equivalent of the resolution kind of jittering. I called this jittering ‘disocclusion fizzle’ in the video I made on FSR 2.0 in God of War. Is this expected behaviour based upon FSR 2.0’s working principles? Is it by design or is it stylistic? Is this something that is going to be changeable in future iterations of FSR 2.0? Nicolas Thibieroz: I don’t think I’ve got a yes or no answer for you, unfortunately. What I would say, though, is that upscaling is fundamentally a complex problem. You’re trying to reconstruct a high quality signal from essentially a very limited set of inputs. So realistically, this process means that any temporal upscaling technology will be subject to minor image quality issues, and those will typically be more visible at higher scale factors [eg performance modes] or, of course, when the game is in motion. They also change… on a per game basis, but often they will not be visible, or at least be less visible, at normal viewing distances and high frame rates, especially if you’re targeting 4K resolution. Having said that, AMD is definitely invested in continuing to improve FSR 2.0 in many of its aspects, with image quality leading the charge. Our number one priority here is the player experience with FSR 2.0. And that means we’ll optimise the algorithm based on typical playing conditions. But again, I expect FSR 2.0 to get better and better.
Even in the meantime… people are already doing really incredible things with FSR 2.0… Darío Samo, whose content we’ve featured before, worked on the Super Mario 64 PC ray tracing port [with] DLSS 2.0 and he immediately picked up and added FSR 2.0. Another thing that we’ve seen, which went beyond the realm of imagination, was that people have been taking DLSS titles released over the last two years and putting AMD’s FSR 2.0 into them, hijacking the .DLL framework. Was this by design? Did you expect this? What is your reaction and what comments do you have about this phenomenon of people putting FSR 2.0 into retail games that did not have it before? Nicolas Thibieroz: I think what we are seeing here is honestly the power of open source. I mean, as far as AMD is concerned, open source is an invitation for developers to join us in innovating. And I think we are seeing this first hand with all the projects you mentioned. So I think it’s fair to say we did expect to see creative use of FSR 2.0, but certainly not so soon after release. So based on this, and the reception from all the professional game devs that we work with, I think this speaks a lot to the quality of our code and the documentation, which is something that we’re actually particularly proud of. So yeah, there’s a rich history of modding in this industry, right? And you know, we are very happy to support the wider development community via all our GPU Open initiatives. But of course… official game integration of FSR 2.0 will always be preferable to a mod. Not only from a pure quality perspective, but also in terms of ease of use for all players. Because obviously the option is directly in the game as opposed to having to fiddle with .DLLs and whatnot. FSR 2.0 mods for Cyberpunk 2077 and Dying Light 2 have real promise… and rough edges.
Getting back to the idea of interoperability here with DLSS to a certain degree: Nvidia detailed Streamline in April of this year and they released some code on GitHub. The idea was essentially [that] super resolution tech is now here on PC, and all the vendors have [developed] their own solution, Intel included. The initiative was basically to create a common API platform, a plug-in interface for developers to use, to make it so that if you have an Intel GPU, you can run Intel XeSS; if you have an Nvidia GPU, well, then the developer has an easier way to put in DLSS. And the same for AMD’s FSR 2.0, hopefully. I was just curious whether AMD wants to support such an initiative in the future? Nicolas Thibieroz: I’m going to be direct with you here. We don’t plan to support Streamline at this time. We believe that focusing on open source technologies is the best approach for gamers and game developers, and we don’t believe that Streamline provides any significant benefits beyond what is currently available - and essentially the underlying Nvidia technologies like DLSS that plug into it, well, they’re still closed and proprietary. So you’re talking about having an open source framework that plugs into a closed technology, right? If I were to contrast this with FSR 2.0: obviously it’s fully open source, easy to implement and supported on multiple platforms, including consoles, which I think is actually key to that particular topic. So there is no need for developers to learn and implement a new framework for something that they can already do easily today. When developing FSR 2.0, you were taking the millisecond frame time cost into consideration with every step, and it was probably a huge part of the optimisation process. You have different paths, essentially, for different GPUs. What about FSR 2.0 on Xbox, for example - does that have specific optimisations in it made specifically for the AMD GPU in the Xbox Series X?
Nicolas Thibieroz: That one does have a yes or no answer: yes, absolutely. FSR 2.0 is specifically optimised for each platform, and that includes the Xbox. Absolutely. Any details on what that may entail, to a certain degree - for example, in comparison to the path that’s used for a modern RDNA 2 chip on PC? Nicolas Thibieroz: Yes, yes. You have to remember that [with] GPUs you have got different performance characteristics: the ratio of resources in the system, in terms of texture instructions versus math instructions, may be different, and so on. So these are the types of examples that would potentially warrant a different code path. But I don’t know the exact details, to be honest with you, as to what exactly we did to make Xbox faster. But I do know for a fact that we did do some things to try to get as close to 2ms as we could on the Series X. So FSR 2.0 has been out for a while. It is going to proliferate. Developers are using it. It’s already been announced in upcoming titles. But beyond the proliferation stage here, how would you personally like to see FSR 2.0 advance, algorithmically and otherwise, in light of the fact that AMD soon enough is going to be releasing new, probably dramatically more powerful GPUs? Are you just interested in that millisecond runtime getting lower and FSR 2.0 becoming faster? Or are you interested in using those newer, faster GPUs to make FSR 2.0 different and better in some way? Nicolas Thibieroz: I think it’s a good question. Yeah, I would say we’ve got a pretty vast roadmap for upscaling research and development. As you pointed out earlier, we released FSR 1.0 last year and we just released FSR 2.0 last month. So the teams remain focused on improving our current solutions and in particular acting on all the feedback we’re getting from our developer partners. That’s very important because we want to elevate the experience for everybody. So again, FSR 2.0 will get better and better.
Now while GPUs will undoubtedly get faster, I’m afraid I can’t comment on any new tech we may be developing to take advantage of these increased capabilities. What I would say, however, is that if you look at our history, we’ve got a track record of ensuring that the developer software we release can scale to a wide range of platforms. I think that is something that is close to our heart. So without pre-announcing anything, I would say that this is something that we’d want to maintain. Okay, so it’ll get better. It’ll be faster. But one thing I’ve always noticed about image reconstruction techniques of all types, ever since they started, is that they’re really good at normal, rasterised geometry edges and textures. They can upsample those really well and make them look really crisp, clean and well anti-aliased - but aspects of rendering that are decoupled from that, like ray traced reflections, usually still look pretty low resolution while upscaling; they look more like the internal resolution. Is AMD interested at all in advancing FSR 2.0 in the direction of making RT also look better? Nicolas Thibieroz: I think temporal upscaling techniques such as FSR 2.0 can actually work with ray tracing effects, as long as the rays traced from the camera view are jittered in the same way as rasterised geometry is jittered [for FSR 2.0]. However, to your point, it is true that some effects such as reflections, whether they are ray traced or not, can actually be challenging with temporal upscaling algorithms. And that’s because, again, the reflected pixels are not correlated to the [frame] history because they don’t have motion vectors or depth. FSR 2.0 is already able to support some of those cases via a special mask available to game developers to improve the quality of the final image. That mask is different from the one I mentioned earlier. This one is called the transparency and composition mask.
Essentially what it does is tell FSR 2.0 to adjust the contribution of any ‘lock’ that exists for the pixel. For the full definition of ‘locks’, we invite your audience to check the documentation, because it gets quite complex after that. Now, we’ve just talked a bit about where FSR 2.0 might be advancing in the future. Machine learning is not a part of FSR 2.0 right now. I think it’s been suitably proven at this point that ML has a certain level of utility for this kind of thing. And the question is, do you think you would be looking towards using machine learning in any capacity for FSR 2.0 in the future, whether that be in the distant future or closer? Nicolas Thibieroz: I think there is no doubt that machine learning is a great tool. I mean, you feed a bunch of inputs and outputs to a machine learning framework so that it can learn the relationships between them, and then once the model is trained, it can be applied to any new inputs. In practice, that is great, right? And it can be applied to a range of problems. And obviously, according to public Nvidia documentation, it is being leveraged by DLSS to reconstruct an image from available inputs. So you can think of ML as a brute force approach to engineering an algorithm, but certainly it is an option. In the case of FSR 2.0, though, we wanted to provide a quality upscaling solution that could run on a wide range of hardware, so we could not rely on any dedicated machine learning acceleration. It was not an option for us. And to be fair, we are actually pretty happy with where we ended up with the hand-crafted algorithms we designed. And now that we know the algorithm inside out, it will allow us to keep improving it in the future. So FSR 2.0 will definitely be evolving. As to whether a future ML-based upscaling solution may be released by AMD… you know what’s coming: I’m afraid I can’t comment on that.
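To close the loop on the jitter requirement Nicolas mentioned for ray traced effects: temporal upscalers get their extra information by offsetting the camera by a sub-pixel amount each frame. A low-discrepancy Halton sequence is a common choice for these offsets; the sketch below is illustrative, and the exact sequence and phase length FSR 2.0 uses are specified in its documentation.

```python
def halton(index, base):
    """Radical-inverse Halton value in [0, 1) for a 1-based index."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter_offset(frame, phase_count=8):
    """Per-frame sub-pixel jitter in [-0.5, 0.5) pixels, cycling every
    phase_count frames. Both rasterised geometry and camera rays must be
    offset by this same amount for the frame history to line up."""
    i = (frame % phase_count) + 1
    return halton(i, 2) - 0.5, halton(i, 3) - 0.5

for frame in range(4):
    print(jitter_offset(frame))
```

The same (x, y) offset is typically folded into the projection matrix for rasterisation and into primary ray generation for ray tracing, so both sample the scene at identical sub-pixel positions.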
One thing we don’t often have the chance to do is to talk with developers like you who are making this kind of tech, and especially [to someone] who is so integral to the project. So what was one aspect of working on FSR 2.0 that you are really proud of, or that you were really satisfied with [at the time]? Because the success is there… what does this mean to you? Nicolas Thibieroz: I’m actually very grateful you asked that question, because it gives me a chance to talk about something that is very dear to me. So in my role as senior director of game engineering at AMD, I get to oversee all aspects of AMD’s technical relationship with game developers worldwide. And FSR 2.0 is certainly a major piece in that puzzle. However, for me, the thing that I’m the most proud of is really the amazing group of people who made it all possible. These are the people you often don’t see, working hard to invent and engineer all those solutions behind the scenes. I actually believe we’ve got some of the brightest in the industry putting their heart and soul into technologies such as FSR 2.0. As the director of this group, I get to have a front row seat to watch the creativity and dedication that it takes to pull off something like FSR 2.0. And believe me, it was hard. We had some significantly difficult gates in terms of quality and performance and other metrics that we had to pass. It really took a concerted effort to get to where we are. So it’s genuinely wonderful to get to work with those people day in, day out. And I’m really, really proud of what we achieved together as a team. That’s wonderful to hear. You know, teams are so important - my co-workers and friends at Digital Foundry are also very important to me, and that’s something that hopefully we communicate in our videos. It’s not just about tech.
It’s sometimes about the experience of loving tech and loving the technology behind everything… Here’s just a little ‘bonus round’ question based on what you said - the struggle to meet [thresholds] of quality and performance. Do you remember any kind of breakthrough moment or ‘aha!’ moment in the FSR 2.0 project, regarding one of the technical aspects of FSR? Nicolas Thibieroz: Actually, there are a few of those, certainly from a quality perspective. Once we saw that we had image quality comparable to DLSS on much of the content, that was clearly very, very good news for everybody involved. But then obviously we had a bunch of technologies packed into that algorithm and it was not as fast as we wanted. Hence all of the effort we put into optimising for different platforms. So again, once we got breakthroughs in terms of finding new ways to optimise, which allowed us to go below certain thresholds in terms of milliseconds, yes, I think there was a lot of joy. Definitely, these kinds of moments really made it worthwhile. Yeah, of course. The way I round out these interviews is to ask if there are any final words you would like to share regarding FSR 2.0, your job or anything, really? Nicolas Thibieroz: I think, again, I’m pretty grateful for that last question because I got to talk about the team of people behind the scenes that do the real work, as opposed to me being in front of you and talking about it. But yeah, I’m very grateful for this opportunity to talk to you guys and, you know, looking forward to seeing what Digital Foundry is up to next. Thank you so much. Thanks to Nicolas for his time and AMD’s PR team for arranging the interview. If you have any follow-up questions for AMD, let us know below - perhaps if we get enough of them, we can do another FSR interview at some stage!