I struggle with UE more than with other engines for any project that doesn't demand an HDRP equivalent and nanometric mesh resolution. Unity isn't exactly a walk in the park either, but the iteration speed tends to be much higher if you aren't a AAA wizard with an entire army at your disposal. I've never once had a UE project on my machine that made me feel I was on a happy path.
Godot and Unity are like cheating by comparison. ~Instant play mode and a trivial debugging experience make a huge difference for solo and small teams. Any experienced .NET developer can become productive on a Unity project in <1 day with reasonable mentorship. The best strategy I had for UE was to just use blueprints, but those are really painful at source control and code review time.
Any thoughts on Verse? I’m not experienced with Unreal or its ecosystem, but it looked like it might be too foreign for me. But Tim Sweeney is no dummy, so it’s probably good and just requires some effort if you’re not already a functional programming nerd?
I absolutely love both Unreal and Unity. Unreal is amazing from a technical perspective, and having worked with a talented team in Unreal, the stuff we were able to make was mind-blowing given the resources we had.
Unity is way easier to work with if you aren't focused on high fidelity graphics. In fact, I've never tried any engine that felt as easy to work with as Unity. I would absolutely not call it a monster. Even with a fairly green team you can hit the ground running and get really productive with Unity.
So yeah, Unity is an easy on-ramp. But unfortunately, I think it puts people in a bad market that doesn't serve them well.
Game engines have always been a problem. They're very tricky to make while covering everyone's use cases, and I don't think they've ever been in as good a state as they are right now.
I'm also always surprised we don't see more games on the current-gen id Software engines.
We're a team with < 10 employees. He's paying very handsomely, so even if his Unreal foray is an absolute disaster, I'll have the savings to find something else.
With a bit of experience you can achieve global illumination results that are competitive with Pixar films by using static scene elements, URP, area lighting, baked GI, and 5~10 minutes on a 5700 XT. The resulting output will run at hundreds of FPS on most platform targets. If that means pegging vsync, it can also translate into power savings on those platforms.
Lights in video games certainly use real electricity, but the power spent on baked lights is amortized across every unique target that runs the game. The biggest advantage of baking is that you can use an unlimited # of lights, enough to emulate a physical scene. There are also types of lights that aren't even accessible in real time (area/volumetric). These produce the most compelling visual results while avoiding problems the others create, such as hotspots in reflection probes and hard shadowing.
Lightmap baking is quickly becoming a lost art because realtime lighting is so simple by comparison (at first!). It also handles a lot of edge cases automagically, the most important being things like dynamic scene elements and foliage. Approximately half of the editor overlays in Unity are dedicated to visualizing the various aspects of baked lighting. It is one of the more difficult things to polish, but if you have the discipline to do so it will make your game highly competitive in the AAA arena.
The crazy thing to me about baked GI is that it used to be incredibly crippling on iteration time. Working at a studio back in 2014, I recall wild schemes to bake lights in AWS so we could iterate faster. Each scene would take hours to fully bake. Today, you can iterate on global GI in a fixed viewport multiple times per second with a progressive GPU lightmapper, and each scene can be fully baked in <10 minutes. There has never been a better time to build games using technology like this. If I took a game studio from a decade ago and gave them the technology we have today, they would wipe the floor with every other studio on earth right now.
This tech doesn't have to be all-or-nothing either. Most well engineered AAA games utilize a mixture of baked & real time. The key is to make as many lights baked as possible, to the extent that you are kind of a constraining asshole about it, even though you can support 8+ dynamic lights per scene object. I look at real time lighting as a bandaid, not a solution.
If you want to attack this from a business perspective - Bleeding edge lighting tech is a nightmare if you want to ship to a large # of customers on a wide range of platforms.
OBJECTIVE: Any project that demands HDRP and Nanometric Mesh
BONUS: Find the happy path
A set of libraries in our codebase had grown to 20% of response time through years of accretion. A couple of months of work cut that in half, with no architectural or cache changes. It was just about the largest, and definitely the most cost-effective, initiative we completed on that team.
Looking at flame charts is only step one. You also need to look at invocation counts, for things that seem to be getting called far more often than they should be. Profiling tools frequently (dare I say consistently) misattribute the costs of functions due to pressure on the CPU subsystems. And most of the time I’ve found optimizations that were substantially larger improvements than expected, it’s been from cumulative call count, not run time.
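A minimal sketch of the cumulative-call-count point, using a hypothetical `profiled` wrapper (the name and API here are illustrative, not from any real profiler):

```typescript
// Wrap a function so we can count invocations alongside total wall time.
// `profiled` is a made-up helper for illustration, not a real profiler API.
function profiled<A extends unknown[], R>(name: string, fn: (...args: A) => R) {
  const stats = { name, calls: 0, totalMs: 0 };
  const wrapped = (...args: A): R => {
    const start = performance.now();
    stats.calls++;
    try {
      return fn(...args);
    } finally {
      stats.totalMs += performance.now() - start;
    }
  };
  return { wrapped, stats };
}

// A cheap function called far too often can dominate cumulative cost,
// even though no single call looks hot on a flame chart.
const { wrapped: parse, stats } = profiled("parseId", (s: string) => Number(s));
for (let i = 0; i < 100_000; i++) parse(String(i % 10));
console.log(stats.calls); // 100000 — the count, not per-call time, is the signal
```

The per-call time here rounds to nothing, which is exactly why such functions hide on flame charts while the call count gives them away.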
Why has it been changed? Including the number of tooltips improves the title and is accurate to the post.
Similarly, adding a modal like this
{isOpen && <Modal isOpen={isOpen} onClose={onClose} />}
instead of
<Modal isOpen={isOpen} onClose={onClose} />
Seems to make the app smoother the more modals we had. Rendering the UI only when you need it (not downloading the code; this is still part of the bundle) seems to be low-hanging fruit for optimizing performance.
You basically have a global part of the component and a local part. The global part is what actually gets rendered when necessary and manages current state, the local part defines what content will be rendered inside the global part for a particular trigger and interacts with the global part when a trigger condition happens (eg hover timeout for a tooltip).
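A sketch of that global/local split, framework-agnostic to keep it short (all names here, like `TooltipManager` and `registerTrigger`, are illustrative, not from any library):

```typescript
// The global part: one tooltip's worth of state, rendered only when needed.
class TooltipManager {
  private current: string | null = null;
  show(content: string) { this.current = content; }
  hide() { this.current = null; }
  get visible() { return this.current !== null; }
  get content() { return this.current; }
}

// The local part: knows its own content and trigger condition (here, a hover
// timeout), and only talks to the global part when that condition fires.
function registerTrigger(manager: TooltipManager, content: string,
                         hoverDelayMs: number) {
  let timer: ReturnType<typeof setTimeout> | null = null;
  return {
    onEnter() { timer = setTimeout(() => manager.show(content), hoverDelayMs); },
    onLeave() {
      if (timer) clearTimeout(timer);
      manager.hide();
    },
  };
}
```

Each control carries only its content and trigger handlers; the heavyweight state lives in the single global part.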
This is, in general, the idea that is solved by native interaction with the DOM. It stores the graphic so it doesn't have to be re-instantiated every time; it gets hidden with "display:none" or something. When it needs to display something, just the content gets swapped and the object gets 'unhidden'.
Good luck.
Excessive nodes - hidden or not - cost memory. On midrange Android it’s scarce, and even if you’re not pushing against the overall device memory limit, the system is more likely to kill your tab in the background if you’ve got a lot going on.
https://developer.mozilla.org/en-US/docs/Web/CSS/@starting-s...
And the “react way” is to have the UI reflect state. If the state says the modal is not being rendered, it should not be rendered.
The tradeoff is for more complicated components, first renders can be slower.
It took the whole afternoon
It's no wonder UE5 games have the reputation of being poorly optimized; you need an insane machine just to run the editor..
State-of-the-art graphics pipeline, but webdev levels of bloat when it comes to the software.. I'd even argue Electron is a smoother experience than the Unreal Engine editor.
Insanity
Wouldn't it taking the whole afternoon be because it's downloading and installing assets, creating caches, indexing, etc?
Like with IDEs, it really doesn't matter much once they're up and running, and the performance of the product ultimately has little to do with the tools used in making it. Poorly optimized games have the reputation of being poorly optimized; that's rarely down to the engine. Maybe it's the complete package, where it's too easy to just plop down assets from the internet without tweaking for performance or having a performance budget per scene.
To get UE games that run well you either need your own engine team to optimise it or you drop all fancy new features.
I am kinda sad we have reached the point where native resolution is not the standard for high-mid-tier/low-high-tier GPUs. Surely games should run natively at non-4K resolution on my 700€+ GPU...
And now antialiasing is so good you can start from lower resolutions and still fake even higher quality
By games I mean modern AAA first- or third-person games. 2D and other genres will often run at full resolution all the time.
New monitors default to 60hz but folks looking to game are convinced by ads that the only reason they lost that last round was not because of the SBMM algorithm, but because the other player undoubtedly had a 240hz 4K monitor rendering the player coming around the corner a tick faster.
Competitive gaming and Twitch are what pushed the current priorities, and the hardware makers were only too happy to oblige.
Care to give an example?
I find UE games to be not only the most optimized, but also capable of running everywhere. Take X-COM, which I can play on my 14-year-old Linux laptop with its i915 excuse-for-a-gfx-card, whereas Unity stuff doesn't work there, and on my Windows gaming rig it always makes everything red-hot without even approaching the quality and fidelity of UE games.
To me UE is like SolidWorks, whereas Unity is like FreeCAD... Which I guess is actually very close to what the differences are :-)
Or is this "reputation of being poorly optimized" only specific to UE version 5 (as compared to older versions of UE, perhaps)?
It also has a terrible reputation because a bunch of the visual effects have a hard dependency on temporal anti-aliasing, a form of AA that typically results in a blurry-looking picture with ghosting as soon as anything moves.
"Firstly, despite its name, the function doesn’t just set the text of a tooltip; it spawns a full tooltip widget, including sub-widgets to display and layout the text, as well as some helper objects. This is not ideal from a performance point of view. The other problem? Unreal does this for every tooltip in the entire editor, and there are a lot of tooltips in Unreal. In fact, up to version 5.6, the text for all the tooltips alone took up around 1 GB of storage space."
But I assume the 1 GB of storage for all tooltips includes boilerplate. I doubt it is 1 GB of raw text.
The user can only ever see one single tooltip at a time. (Or maybe more if you have tooltips for tooltips, but I don't think Unreal has that; the point is, a limited number.)
So initialize a single tooltip object. When the user mouses over an element with a tooltip, set the appropriate text, move the tooltip widget to the right position, and show it. If the user moves away, hide it.
Simple and takes nearly no memory. Seems like some people still suffer from 90s OOP brain rot.
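A minimal sketch of that shared-tooltip idea (the `SharedTooltip` class and its method names are made up for illustration; a real UI toolkit would use its own widget types):

```typescript
// One tooltip object for the whole application: set text, move, show, hide.
class SharedTooltip {
  private visible = false;
  private text = "";
  private x = 0;
  private y = 0;

  // On mouse-over: swap the content, reposition, unhide.
  showAt(text: string, x: number, y: number) {
    this.text = text;
    this.x = x;
    this.y = y;
    this.visible = true;
  }
  // On mouse-out: just hide; the object is reused, never destroyed.
  hide() { this.visible = false; }
  state() { return { visible: this.visible, text: this.text, x: this.x, y: this.y }; }
}

// Hovering control A, then control B, reuses the same object:
const tip = new SharedTooltip();
tip.showAt("Save the current file", 120, 48);
tip.showAt("Undo last action", 300, 48); // same widget, new text and position
tip.hide();
```

Memory cost is one widget instead of one per control, and per-hover work is a string assignment rather than a widget-tree construction.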
(Depending on your IMGUI API you might be setting tooltip text in advance as a constant on every visible control, but that's probably a lot fewer than 38000 controls, I'd hope.)
It's interesting that every control previously had its own dedicated tooltip component, instead of having all controls share a single system wide tooltip. I'm curious why they designed it that way.
With immediate mode you don't have to construct any widgets or objects. You just render them via code every frame which gives you more freedom in how you tackle each UI element. You're not forced into one widget system across the entire application. For example, if you detect your tooltip code is slow you could memcpy all the strings in a block of memory and then have tooltips use an index to that memory, or have them load on demand from disk, or the cloud or space or whatever. The point being you can optimise the UI piecemeal.
Immediate mode has its own challenges but I do find it interesting to at least see how the different approaches would tackle the problem
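A rough sketch of the piecemeal optimisation described above, with tooltip strings packed into one shared table and controls holding only an index (all names here are illustrative, not from any real IMGUI library):

```typescript
// Stand-in for the "memcpy all the strings into one block" idea: a shared
// table of tooltip strings that controls reference by small integer index.
const tooltipTexts: string[] = [];
function internTooltip(text: string): number {
  tooltipTexts.push(text);
  return tooltipTexts.length - 1; // controls store this index, not the string
}

// Immediate-mode style: controls are plain data, redrawn from scratch each frame.
interface Control { label: string; tooltip: number }

function drawFrame(controls: Control[], hovered: number | null): string | null {
  let shown: string | null = null;
  for (let i = 0; i < controls.length; i++) {
    // ...render controls[i] here...
    if (i === hovered) shown = tooltipTexts[controls[i].tooltip];
  }
  return shown; // the one tooltip actually visible this frame
}
```

Because the UI is just data plus a draw loop, swapping the string table for an on-demand loader later would only touch `internTooltip` and the lookup, not the rest of the UI.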
AFAIK, UE relies on a retained mode GUI, but I never got far enough into that version of Narnia to experience it first hand.
It seems like those libraries do what IMGUIs do, but in a more structured way.
React requires knowing about your state because it wants to monitor all of it for changes, to try to skip work when nothing changed. This ends up infecting every part of your code. It's the number 1 frustration I have using React. I haven't used Flutter or SwiftUI so I don't know if they are analogous.
I like Godot primarily because of GDScript; you are not compiling anything, so iteration time is greatly reduced. Unigine is also worth a mention. It has all the modern features you could want, like double precision, global illumination, etc., but without the bloat and complexity. It's easy to use as much or as little of the engine as you need; in every project, you write the main() function yourself. Similar license options to Unity/Unreal.